Macroeconomic theory has its origins in the study of business cycles and monetary theory. In general, early theorists believed monetary factors could not affect real factors such as real output. John Maynard Keynes attacked some of these "classical" theories and produced a general theory that described the whole economy in terms of aggregates rather than individual, microeconomic parts. Attempting to explain unemployment and recessions, he noticed the tendency for people and businesses to hoard cash and avoid investment during a recession. He argued that this invalidated the assumptions of classical economists who thought that markets always clear, leaving no surplus of goods and no willing labor left idle.
The generation of economists that followed Keynes synthesized his theory with neoclassical microeconomics to form the neoclassical synthesis. Although Keynesian theory originally omitted an explanation of price levels and inflation, later Keynesians adopted the Phillips curve to model price-level changes. Some Keynesians opposed the synthesis method of combining Keynes's theory with an equilibrium system and advocated disequilibrium models instead. Monetarists, led by Milton Friedman, adopted some Keynesian ideas, such as the importance of the demand for money, but argued that Keynesians ignored the role of money supply in inflation. Robert Lucas and other new classical macroeconomists criticized Keynesian models that did not work under rational expectations. Lucas also argued that Keynesian empirical models would not be as stable as models based on microeconomic foundations.
The new classical school culminated in real business cycle theory (RBC). Like early classical economic models, RBC models assumed that markets clear and that business cycles are driven by changes in technology and supply, not demand. New Keynesians tried to address many of the criticisms leveled by Lucas and other new classical economists against Neo-Keynesians. New Keynesians adopted rational expectations and built models with microfoundations of sticky prices that suggested recessions could still be explained by demand factors because rigidities stop prices from falling to a market-clearing level, leaving a surplus of goods and labor. The new neoclassical synthesis combined elements of both new classical and new Keynesian macroeconomics into a consensus. Other economists avoided the new classical and new Keynesian debate on short-term dynamics and developed the new growth theories of long-run economic growth. The Great Recession led to a retrospective on the state of the field and some popular attention turned toward heterodox economics.
Origins
Macroeconomics descends from two areas of research: business cycle theory and monetary theory. Monetary theory dates back to the 16th century and the work of Martín de Azpilcueta, while business cycle analysis dates from the mid-19th century.
= Business cycle theory =
Beginning with William Stanley Jevons and Clément Juglar in the 1860s, economists attempted to explain the cycles of frequent, violent shifts in economic activity. A key milestone in this endeavor was the foundation of the U.S. National Bureau of Economic Research by Wesley Mitchell in 1920. This marked the beginning of a boom in atheoretical, statistical models of economic fluctuation (models based on cycles and trends instead of economic theory) that led to the discovery of apparently regular economic patterns like the Kuznets wave.
Other economists focused more on theory in their business cycle analysis. Most business cycle theories focused on a single factor, such as monetary policy or the impact of weather on the largely agricultural economies of the time. Although business cycle theory was well established by the 1920s, work by theorists such as Dennis Robertson and Ralph Hawtrey had little impact on public policy. Their partial equilibrium theories could not capture general equilibrium, where markets interact with each other; in particular, early business cycle theories treated goods markets and financial markets separately. Research in these areas used microeconomic methods to explain employment, price level, and interest rates.
= Monetary theory =
Initially, the relationship between price level and output was explained by the quantity theory of money; David Hume had presented such a theory in his 1752 work Of Money (Essays, Moral, Political, and Literary, Part II, Essay III). Quantity theory viewed the entire economy through Say's law, which stated that whatever is supplied to the market will be sold—in short, that markets always clear. In this view, money is neutral and cannot impact the real factors in an economy like output levels. This was consistent with the classical dichotomy view that real aspects of the economy and nominal factors, such as price levels and money supply, can be considered independent from one another. For example, adding more money to an economy would be expected only to raise prices, not to create more goods.
The quantity theory of money dominated macroeconomic theory until the 1930s. Two versions were particularly influential, one developed by Irving Fisher in works that included his 1911 The Purchasing Power of Money and another by Cambridge economists over the course of the early 20th century. Fisher's version of the quantity theory can be expressed by holding money velocity (the frequency with which a given piece of currency is used in transactions) (V) and real income (Q) constant and allowing money supply (M) and the price level (P) to vary in the equation of exchange:
M · V = P · Q
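A minimal arithmetic sketch of the equation of exchange (the figures below are purely illustrative, not from the source): with V and Q held fixed, any change in M shows up one-for-one in P.

```python
# Quantity theory of money: with velocity V and real output Q held fixed,
# the equation of exchange M * V = P * Q pins the price level to the money supply.
def price_level(money_supply: float, velocity: float, real_output: float) -> float:
    """Solve M * V = P * Q for the price level P."""
    return money_supply * velocity / real_output

V, Q = 4.0, 1000.0                 # hypothetical, constant velocity and output
print(price_level(500.0, V, Q))    # 2.0
print(price_level(1000.0, V, Q))   # 4.0  (doubling M doubles P)
```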
Most classical theories, including Fisher's, held that velocity was stable and independent of economic activity. Cambridge economists, such as John Maynard Keynes, began to challenge this assumption. They developed the Cambridge cash-balance theory, which looked at money demand and how it impacted the economy. The Cambridge theory did not assume that money demand and supply were always at equilibrium, and it accounted for people holding more cash when the economy sagged. By factoring in the value of holding cash, the Cambridge economists took significant steps toward the concept of liquidity preference that Keynes would later develop. Cambridge theory argued that people hold money for two reasons: to facilitate transactions and to maintain liquidity. In later work, Keynes added a third motive, speculation, to his liquidity preference theory and built on it to create his general theory.
In 1898, Knut Wicksell proposed a monetary theory centered on interest rates. His analysis used two rates: the market interest rate, determined by the banking system, and the real or "natural" interest rate, determined by the rate of return on capital. In Wicksell's theory, cumulative inflation will occur when technical innovation causes the natural rate to rise or when the banking system allows the market rate to fall. Cumulative deflation occurs under the opposite conditions, when the market rate rises above the natural rate. Wicksell's theory did not produce a direct relationship between the quantity of money and the price level. According to Wicksell, money would be created endogenously, without an increase in the quantity of hard currency, as long as the natural rate exceeded the market interest rate. In these conditions, borrowers turn a profit and deposit cash into bank reserves, which expands the money supply. This can lead to a cumulative process where inflation increases continuously without an expansion in the monetary base. Wicksell's work influenced Keynes and the Swedish economists of the Stockholm School.
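The cumulative process can be illustrated with a toy simulation (a sketch under assumed, hypothetical parameters; Wicksell gave no such formula): prices keep rising as long as the natural rate stays above the market rate, and stop when the gap closes.

```python
# Toy version of Wicksell's cumulative process: while the natural rate of return
# exceeds the bank (market) rate, borrowing stays profitable, credit expands, and
# the price level keeps rising; inflation halts only when the gap closes.
def cumulative_process(natural_rate, market_rate, periods=10, sensitivity=0.5):
    price_level, path = 1.0, []
    for _ in range(periods):
        gap = natural_rate - market_rate
        price_level *= 1 + sensitivity * max(gap, 0.0)  # inflate only while gap > 0
        path.append(round(price_level, 3))
    return path

print(cumulative_process(natural_rate=0.05, market_rate=0.03))  # steadily rising
print(cumulative_process(natural_rate=0.03, market_rate=0.03))  # flat price level
```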
Keynes's General Theory
Modern macroeconomics can be said to have begun with Keynes and the publication of his book The General Theory of Employment, Interest and Money in 1936. Keynes expanded on the concept of liquidity preferences and built a general theory of how the economy worked. Keynes's theory brought together both monetary and real economic factors for the first time, explained unemployment, and suggested policies for achieving economic stability.
Keynes contended that economic output is positively correlated with money velocity. He explained the relationship via changing liquidity preferences: people increase their money holdings during times of economic difficulty by reducing their spending, which further slows the economy. This paradox of thrift claimed that individual attempts to survive a downturn only worsen it. When the demand for money increases, money velocity slows. A slowdown in economic activity means markets might not clear, leaving excess goods to waste and capacity to idle. Turning the quantity theory on its head, Keynes argued that market changes shift quantities rather than prices. Keynes replaced the assumption of stable velocity with one of a fixed price level. If spending falls and prices do not, the surplus of goods reduces the need for workers and increases unemployment.
Classical economists had difficulty explaining involuntary unemployment and recessions because they applied Say's Law to the labor market and expected that all those willing to work at the prevailing wage would be employed. In Keynes's model, employment and output are driven by aggregate demand, the sum of consumption and investment. Since consumption remains stable, most fluctuations in aggregate demand stem from investment, which is driven by many factors including expectations, "animal spirits", and interest rates. Keynes argued that fiscal policy could compensate for this volatility. During downturns, governments could increase spending to purchase excess goods and employ idle labor. Moreover, a multiplier effect increases the effect of this direct spending since newly employed workers would spend their income, which would percolate through the economy, while firms would invest to respond to this increase in demand.
Keynes's prescription for strong public investment had ties to his interest in uncertainty. Keynes had given a unique perspective on statistical inference in A Treatise on Probability, written in 1921, years before his major economic works. Keynes thought strong public investment and fiscal policy would counter the negative impacts the uncertainty of economic fluctuations can have on the economy. While Keynes's successors paid little attention to the probabilistic parts of his work, uncertainty may have played a central part in the investment and liquidity-preference aspects of General Theory.
The exact meaning of Keynes's work has long been debated. Even the interpretation of Keynes's policy prescription for unemployment, one of the more explicit parts of General Theory, has been the subject of debate. Economists and scholars debate whether Keynes intended his advice to be a major policy shift to address a serious problem or a moderately conservative solution to deal with a minor issue.
Keynes's successors
Keynes's successors debated the exact formulations, mechanisms, and consequences of the Keynesian model. One group emerged representing the "orthodox" interpretation of Keynes; they combined classical microeconomics with Keynesian thought to produce the "neoclassical synthesis" that dominated economics from the 1940s until the early 1970s. Two camps of Keynesians were critical of this synthesis interpretation of Keynes. One group focused on the disequilibrium aspects of Keynes's work, while the other took a fundamentalist stance on Keynes and began the heterodox post-Keynesian tradition.
= Neoclassical synthesis =
The generation of economists that followed Keynes, the neo-Keynesians, created the "neoclassical synthesis" by combining Keynes's macroeconomics with neoclassical microeconomics. Neo-Keynesians dealt with two microeconomic issues: first, providing foundations for aspects of Keynesian theory such as consumption and investment, and, second, combining Keynesian macroeconomics with general equilibrium theory. (In general equilibrium theory, individual markets interact with one another and an equilibrium price exists if there is perfect competition, no externalities, and perfect information.) Paul Samuelson's Foundations of Economic Analysis (1947) provided much of the microeconomic basis for the synthesis. Samuelson's work set the pattern for the methodology used by neo-Keynesians: economic theories expressed in formal, mathematical models. While Keynes's theories prevailed in this period, his successors largely abandoned his informal methodology in favor of Samuelson's.
By the mid-1950s, the vast majority of economists had ceased debating Keynesianism and accepted the synthesis view; however, room for disagreement remained. The synthesis attributed problems with market clearing to sticky prices that failed to adjust to changes in supply and demand. Another group of Keynesians focused on disequilibrium economics and tried to reconcile the concept of equilibrium with the absence of market clearing.
= Neo-Keynesian models =
In 1937 John Hicks published an article that incorporated Keynes's thought into a general equilibrium framework where the markets for goods and money met in an overall equilibrium. Hicks's IS/LM (Investment-Savings/Liquidity preference-Money supply) model became the basis for decades of theorizing and policy analysis into the 1960s. The model represents the goods market with the IS curve, a set of points representing equilibrium in investment and savings. The money market equilibrium is represented with the LM curve, a set of points representing the equilibrium in supply and demand for money. The intersection of the curves identifies an aggregate equilibrium in the economy where there are unique equilibrium values for interest rates and economic output. The IS/LM model focused on interest rates as the "monetary transmission mechanism," the channel through which money supply affects real variables like aggregate demand and employment. A decrease in money supply would lead to higher interest rates, which reduce investment and thereby lower output throughout the economy. Other economists built on the IS/LM framework. Notably, in 1944, Franco Modigliani added a labor market. Modigliani's model represented the economy as a system with general equilibrium across the interconnected markets for labor, finance, and goods, and it explained unemployment with rigid nominal wages.
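The mechanics can be sketched with a stylized linear IS/LM system (the functional forms and parameter values below are hypothetical textbook-style choices, not taken from Hicks): solving the two curves simultaneously gives equilibrium output and the interest rate, and cutting the money supply raises the rate and lowers output, as described above.

```python
def is_lm_equilibrium(a, b, k, h, real_money):
    """Solve a stylized linear IS/LM system.

    IS:  Y = a - b*r             (goods-market equilibrium)
    LM:  real_money = k*Y - h*r  (money-market equilibrium)
    Returns equilibrium output Y and interest rate r.
    """
    y = (a + b * real_money / h) / (1 + b * k / h)
    r = (k * y - real_money) / h
    return y, r

# Hypothetical parameters: a money-supply contraction raises r and lowers Y.
print(is_lm_equilibrium(a=1200, b=40, k=0.5, h=20, real_money=400))  # (1000.0, 5.0)
print(is_lm_equilibrium(a=1200, b=40, k=0.5, h=20, real_money=350))  # (950.0, 6.25)
```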
Growth had been of interest to 18th-century classical economists like Adam Smith, but work tapered off during the 19th and early 20th century marginalist revolution, when researchers focused on microeconomics. The study of growth revived when the neo-Keynesians Roy Harrod and Evsey Domar independently developed the Harrod–Domar model, an extension of Keynes's theory to the long run, an area Keynes had not looked at himself. Their models combined Keynes's multiplier with an accelerator model of investment, and produced the simple result that growth equaled the savings rate divided by the capital-output ratio (the amount of capital divided by the amount of output). The Harrod–Domar model dominated growth theory until Robert Solow and Trevor Swan independently developed neoclassical growth models in 1956. Solow and Swan produced a more empirically appealing model with "balanced growth" based on the substitution of labor and capital in production. Solow and Swan suggested that increased savings could only temporarily increase growth, and that only technological improvements could increase growth in the long run. After Solow and Swan, growth research tapered off, with little or no work on the topic from 1970 until 1985.
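The contrast between the two models can be illustrated with a short sketch (parameter values are hypothetical): the Harrod–Domar result is the simple ratio stated above, while a Solow-style accumulation loop with diminishing returns shows growth from saving alone fading toward zero without technological progress.

```python
def harrod_domar_growth(savings_rate, capital_output_ratio):
    """Harrod–Domar: growth equals the savings rate over the capital-output ratio."""
    return savings_rate / capital_output_ratio

def solow_growth_path(savings_rate=0.2, alpha=0.3, depreciation=0.05, periods=5, k0=1.0):
    """Solow-style accumulation with diminishing returns (Y = K**alpha, labor and
    technology held fixed): growth driven by saving alone peters out over time."""
    k, rates = k0, []
    for _ in range(periods):
        y = k ** alpha
        k_next = k + savings_rate * y - depreciation * k
        rates.append(round(k_next ** alpha / y - 1, 4))  # output growth per period
        k = k_next
    return rates

print(harrod_domar_growth(0.2, 4.0))  # 0.05, i.e. 5% growth
print(solow_growth_path())            # per-period growth rates shrinking toward zero
```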
Economists incorporated the theoretical work from the synthesis into large-scale macroeconometric models that combined individual equations for factors such as consumption, investment, and money demand with empirically observed data. This line of research reached its height with the MIT-Penn-Social Science Research Council (MPS) model developed by Modigliani and his collaborators. MPS combined IS/LM with other aspects of the synthesis including the neoclassical growth model and the Phillips curve relation between inflation and output. Both large-scale models and the Phillips curve became targets for critics of the synthesis.
= Phillips curve =
Keynes did not lay out an explicit theory of price level. Early Keynesian models assumed wage and other price levels were fixed. These assumptions caused little concern in the 1950s when inflation was stable, but by the mid-1960s inflation increased and became an issue for macroeconomic models. In 1958 A.W. Phillips set the basis for a price level theory when he made the empirical observation that inflation and unemployment seemed to be inversely related. In 1960 Richard Lipsey provided the first theoretical explanation of this correlation. Generally Keynesian explanations of the curve held that excess demand drove high inflation and low unemployment while an output gap raised unemployment and depressed prices. In the late 1960s and early 1970s, the Phillips curve faced attacks on both empirical and theoretical fronts. The presumed trade-off between output and inflation represented by the curve was the weakest part of the Keynesian system.
= Disequilibrium macroeconomics =
Despite its prevalence, the neoclassical synthesis had its Keynesian critics. A strain of disequilibrium or "non-Walrasian" theory developed that criticized the synthesis for apparent contradictions in allowing disequilibrium phenomena, especially involuntary unemployment, to be modeled in equilibrium models. Moreover, they argued, the presence of disequilibrium in one market must be associated with disequilibrium in another, so involuntary unemployment had to be tied to an excess supply in the goods market. Many see Don Patinkin's work as the first in the disequilibrium vein. Robert W. Clower (1965) introduced his "dual-decision hypothesis" that a person in a market may determine what he wants to buy, but is ultimately limited in how much he can buy based on how much he can sell. Clower and Axel Leijonhufvud (1968) argued that disequilibrium formed a fundamental part of Keynes's theory and deserved greater attention. Robert Barro and Herschel Grossman formulated general disequilibrium models in which individual markets were locked into prices before there was a general equilibrium. These markets produced "false prices" resulting in disequilibrium. Soon after the work of Barro and Grossman, disequilibrium models fell out of favor in the United States, and Barro abandoned Keynesianism and adopted new classical, market clearing hypotheses.
While American economists quickly abandoned disequilibrium models, European economists were more open to models without market clearing. Europeans such as Edmond Malinvaud and Jacques Drèze expanded on the disequilibrium tradition and worked to explain price rigidity instead of simply assuming it. Malinvaud (1977) used disequilibrium analysis to develop a theory of unemployment. He argued that disequilibrium in the labor and goods markets could lead to rationing of goods and labor, leading to unemployment. Malinvaud adopted a fixprice framework and argued that pricing would be rigid in modern, industrial economies compared to the relatively flexible pricing systems of raw goods that dominate agricultural economies. Prices are fixed and only quantities adjust. Malinvaud considers an equilibrium state in classical and Keynesian unemployment as most likely. Work in the neoclassical tradition is confined to a special case of Malinvaud's typology, the Walrasian equilibrium. In Malinvaud's theory, the Walrasian equilibrium case is almost impossible to achieve given the nature of industrial pricing.
Monetarism
Milton Friedman developed an alternative to Keynesian macroeconomics eventually labeled monetarism. Generally monetarism is the idea that the supply of money matters for the macroeconomy. When monetarism emerged in the 1950s and 1960s, Keynesians neglected the role money played in inflation and the business cycle, and monetarism directly challenged those points.
= Criticizing and augmenting the Phillips curve =
The Phillips curve appeared to reflect a clear, inverse relationship between inflation and output. The curve broke down in the 1970s as economies suffered simultaneous economic stagnation and inflation known as stagflation. The empirical implosion of the Phillips curve followed attacks mounted on theoretical grounds by Friedman and Edmund Phelps. Phelps, although not a monetarist, argued that only unexpected inflation or deflation impacted employment. Variations of Phelps's "expectations-augmented Phillips curve" became standard tools. Friedman and Phelps used models with no long-run trade-off between inflation and unemployment. Instead of the Phillips curve they used models based on the natural rate of unemployment where expansionary monetary policy can only temporarily shift unemployment below the natural rate. Eventually, firms will adjust their prices and wages for inflation based on real factors, ignoring nominal changes from monetary policy. The expansionary boost will be wiped out.
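A standard textbook statement of the expectations-augmented relationship (an illustrative form, not a quotation from Friedman or Phelps) makes the argument explicit:

```latex
% Expectations-augmented Phillips curve: inflation depends on expected inflation
% and on the gap between unemployment u_t and its natural rate u^{*}.
\[
  \pi_t = \pi_t^{e} - \beta\,(u_t - u^{*}), \qquad \beta > 0.
\]
% In the long run expectations catch up, \pi_t^{e} = \pi_t, which forces
% u_t = u^{*}: there is no lasting trade-off between inflation and unemployment.
```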
= Importance of money =
Anna Schwartz collaborated with Friedman to produce one of monetarism's major works, A Monetary History of the United States (1963), which linked money supply to the business cycle. The Keynesians of the 1950s and 60s had adopted the view that monetary policy does not impact aggregate output or the business cycle based on evidence that, during the Great Depression, interest rates had been extremely low but output remained depressed. Friedman and Schwartz argued that Keynesians only looked at nominal rates and neglected the role inflation plays in real interest rates, which had been high during much of the Depression. In real terms, monetary policy had effectively been contractionary, putting downward pressure on output and employment, even though economists looking only at nominal rates thought monetary policy had been stimulative.
Friedman developed his own quantity theory of money that referred to Irving Fisher's but inherited much from Keynes. Friedman's 1956 "The Quantity Theory of Money: A Restatement" incorporated Keynes's demand for money and liquidity preference into an equation similar to the classical equation of exchange. Friedman's updated quantity theory also allowed for the possibility of using monetary or fiscal policy to remedy a major downturn. Friedman broke with Keynes by arguing that money demand is relatively stable—even during a downturn. Monetarists argued that "fine-tuning" through fiscal and monetary policy is counterproductive. They found money demand to be stable even during fiscal policy shifts, and both fiscal and monetary policies suffer from lags that made them too slow to prevent mild downturns.
= Prominence and decline =
Monetarism attracted the attention of policy makers in the late-1970s and 1980s. Friedman and Phelps's version of the Phillips curve performed better during stagflation and gave monetarism a boost in credibility. By the mid-1970s monetarism had become the new orthodoxy in macroeconomics, and by the late-1970s central banks in the United Kingdom and United States had largely adopted a monetarist policy of targeting money supply instead of interest rates when setting policy. However, targeting monetary aggregates proved difficult for central banks because of measurement difficulties. Monetarism faced a major test when Paul Volcker took over the Federal Reserve Chairmanship in 1979. Volcker tightened the money supply and brought inflation down, creating a severe recession in the process. The recession lessened monetarism's popularity but clearly demonstrated the importance of money supply in the economy. Monetarism became less credible when once-stable money velocity defied monetarist predictions and began to move erratically in the United States during the early 1980s. Monetarist methods of single-equation models and non-statistical analysis of plotted data also lost out to the simultaneous-equation modeling favored by Keynesians. Monetarism's policies and method of analysis lost influence among central bankers and academics, but its core tenets of the long-run neutrality of money (increases in money supply cannot have long-term effects on real variables, such as output) and use of monetary policy for stabilization became a part of the macroeconomic mainstream even among Keynesians.
New classical economics
"New classical economics" evolved from monetarism and presented other challenges to Keynesianism. Early new classicals considered themselves monetarists, but the new classical school evolved. New classicals abandoned the monetarist belief that monetary policy could systematically impact the economy, and eventually embraced real business cycle models that ignored monetary factors entirely.
New classicals broke with Keynesian economic theory completely while monetarists had built on Keynesian ideas. Despite discarding Keynesian theory, new classical economists did share the Keynesian focus on explaining short-run fluctuations. New classicals replaced monetarists as the primary opponents to Keynesianism and changed the primary debate in macroeconomics from whether to look at short-run fluctuations to whether macroeconomic models should be grounded in microeconomic theories. Like monetarism, new classical economics was rooted at the University of Chicago, principally with Robert Lucas. Other leaders in the development of new classical economics include Edward Prescott at University of Minnesota and Robert Barro at University of Rochester.
New classical economists wrote that earlier macroeconomic theory was based only tenuously on microeconomic theory and described its efforts as providing "microeconomic foundations for macroeconomics." New classicals also introduced rational expectations and argued that governments had little ability to stabilize the economy given the rational expectations of economic agents. Most controversially, new classical economists revived the market clearing assumption, assuming both that prices are flexible and that the market should be modeled at equilibrium.
= Rational expectations and policy irrelevance =
Keynesians and monetarists recognized that people based their economic decisions on expectations about the future. However, until the 1970s, most models relied on adaptive expectations, which assumed that expectations were based on an average of past trends. For example, if inflation averaged 4% over a period, economic agents were assumed to expect 4% inflation the following year. In 1972 Lucas, influenced by a 1961 agricultural economics paper by John Muth, introduced rational expectations to macroeconomics. Essentially, adaptive expectations modeled behavior as if it were backward-looking while rational expectations modeled economic agents (consumers, producers and investors) who were forward-looking. New classical economists also claimed that an economic model would be internally inconsistent if it assumed that the agents it models behave as if they were unaware of the model. Under the assumption of rational expectations, models assume agents make predictions based on the optimal forecasts of the model itself. This did not imply that people have perfect foresight, but that they act with an informed understanding of economic theory and policy.
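A stylized comparison of the two rules (the inflation process and parameter values below are hypothetical, used only to show the mechanical difference): the adaptive forecaster revises an old forecast toward the last observation, while the rational forecaster uses the policy rule itself.

```python
# Suppose actual inflation follows a known rule: pi_t = 2 + 0.5 * pi_{t-1}.
# (A deterministic toy; with random shocks the rational forecast would be
# right on average rather than exactly.)
def adaptive_expectation(prev_forecast, last_observed, weight=0.5):
    """Backward-looking: adjust the old forecast toward the last observation."""
    return prev_forecast + weight * (last_observed - prev_forecast)

def rational_expectation(last_observed):
    """Forward-looking: forecast with the same rule that generates inflation."""
    return 2 + 0.5 * last_observed

pi_prev, forecast = 4.0, 3.0
for _ in range(5):
    pi_now = 2 + 0.5 * pi_prev                 # actual inflation this period
    adaptive = adaptive_expectation(forecast, pi_prev)
    rational = rational_expectation(pi_prev)   # equals pi_now in this toy
    print(round(pi_now, 2), round(adaptive, 2), round(rational, 2))
    forecast, pi_prev = adaptive, pi_now
```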
Thomas Sargent and Neil Wallace (1975) applied rational expectations to models with Phillips curve trade-offs between inflation and output and found that monetary policy could not be used to systematically stabilize the economy. Sargent and Wallace's policy ineffectiveness proposition found that economic agents would anticipate inflation and adjust to higher price levels before the influx of monetary stimulus could boost employment and output. Only unanticipated monetary policy could increase employment, and no central bank could systematically use monetary policy for expansion without economic agents catching on and anticipating price changes before they could have a stimulative impact.
Robert E. Hall applied rational expectations to Friedman's permanent income hypothesis that people base the level of their current spending on their wealth and lifetime income rather than current income. Hall found that people will smooth their consumption over time and only alter their consumption patterns when their expectations about future income change. Both Hall's and Friedman's versions of the permanent income hypothesis challenged the Keynesian view that short-term stabilization policies like tax cuts can stimulate the economy. The permanent income view suggests that consumers base their spending on wealth, so a temporary boost in income would only produce a moderate increase in consumption. Empirical tests of Hall's hypothesis suggest it may understate boosts in consumption due to income increases; however, Hall's work helped to popularize Euler equation models of consumption.
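A standard consumption Euler equation of the kind Hall's work helped popularize (textbook notation, given only as an illustration) summarizes the smoothing logic:

```latex
% With discount factor \beta, gross real return R, and period utility u(\cdot),
% the optimizing consumer equates marginal utility today with expected,
% discounted marginal utility tomorrow:
\[
  u'(c_t) = \beta R \,\mathbb{E}_t\!\left[u'(c_{t+1})\right].
\]
% Under quadratic utility and \beta R = 1 this reduces to Hall's random-walk
% result, \mathbb{E}_t[c_{t+1}] = c_t: consumption changes only when news about
% lifetime income arrives.
```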
= The Lucas critique and microfoundations =
In 1976 Lucas wrote a paper criticizing large-scale Keynesian models used for forecasting and policy evaluation. Lucas argued that economic models based on empirical relationships between variables are unstable as policies change: a relationship under one policy regime may be invalid after the regime changes. The Lucas critique went further and argued that a policy's impact is determined by how the policy alters the expectations of economic agents. No model is stable unless it accounts for expectations and how expectations relate to policy. New classical economists argued that abandoning the disequilibrium models of Keynesianism and focusing on structure- and behavior-based equilibrium models would remedy these faults. Keynesian economists responded by building models with microfoundations grounded in stable theoretical relationships.
= Lucas supply theory and business cycle models =
Lucas and Leonard Rapping laid out the first new classical approach to aggregate supply in 1969. Under their model, changes in employment are based on worker preferences for leisure time. Lucas and Rapping modeled decreases in employment as voluntary choices of workers to reduce their work effort in response to the prevailing wage.
Lucas (1973) proposed a business cycle theory based on rational expectations, imperfect information, and market clearing. While building this model, Lucas attempted to incorporate the empirical fact that there had been a trade-off between inflation and output without ceding that money was non-neutral in the short-run. This model included the idea of money surprise: monetary policy only matters when it causes people to be surprised or confused by the price of goods changing relative to one another. Lucas hypothesized that producers become aware of changes in their own industries before they recognize changes in other industries. Given this assumption, a producer might perceive an increase in general price level as an increase in the demand for his goods. The producer responds by increasing production only to find the "surprise" that prices had increased across the economy generally rather than specifically for his goods. This "Lucas supply curve" models output as a function of the "price" or "money surprise," the difference between expected and actual inflation. Lucas's "surprise" business cycle theory fell out of favor after the 1970s when empirical evidence failed to support this model.
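The "surprise" supply relation described in this paragraph is usually written in textbook notation as follows (an illustrative form, not Lucas's exact 1973 specification):

```latex
% Output y_t deviates from its natural level y^{n} only when the actual price
% level differs from what agents expected:
\[
  y_t = y^{n} + b\,\bigl(p_t - \mathbb{E}_{t-1}[p_t]\bigr), \qquad b > 0.
\]
% Fully anticipated monetary policy moves p_t and \mathbb{E}_{t-1}[p_t] together
% and leaves output unchanged; only the unanticipated component matters.
```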
= Real business cycle theory =
While "money surprise" models floundered, efforts continued to develop a new classical model of the business cycle. A 1982 paper by Kydland and Prescott introduced real business cycle theory (RBC). Under this theory business cycles could be explained entirely by the supply side, and models represented the economy with systems at constant equilibrium. RBC dismissed the need to explain business cycles with price surprise, market failure, price stickiness, uncertainty, and instability. Instead, Kydland and Prescott built parsimonious models that explained business cycles with changes in technology and productivity. Employment levels changed because these technological and productivity changes altered the desire of people to work. RBC rejected the idea of high involuntary unemployment in recessions and not only dismissed the idea that money could stabilize the economy but also the monetarist idea that money could destabilize it.
Real business cycle modelers sought to build macroeconomic models based on microfoundations of Arrow–Debreu general equilibrium. RBC models were one of the inspirations for dynamic stochastic general equilibrium (DSGE) models. DSGE models have become a common methodological tool for macroeconomists—even those who disagree with new classical theory.
New Keynesian economics
New classical economics had pointed out the inherent contradiction of the neoclassical synthesis: Walrasian microeconomics with market clearing and general equilibrium could not lead to Keynesian macroeconomics where markets failed to clear. New Keynesians recognized this paradox, but, while the new classicals abandoned Keynes, new Keynesians abandoned Walras and market clearing.
During the late 1970s and 1980s, new Keynesian researchers investigated how market imperfections like monopolistic competition, nominal frictions like sticky prices, and other frictions made microeconomics consistent with Keynesian macroeconomics. New Keynesians often formulated models with rational expectations, which had been proposed by Lucas and adopted by new classical economists.
= Nominal and real rigidities =
Stanley Fischer (1977) responded to Thomas J. Sargent and Neil Wallace's monetary ineffectiveness proposition and showed how monetary policy could stabilize an economy even in a model with rational expectations. Fischer's model showed how monetary policy could have an impact in a model with long-term nominal wage contracts. John B. Taylor expanded on Fischer's work and found that monetary policy could have long-lasting effects—even after wages and prices had adjusted. Taylor arrived at this result by building on Fischer's model with the assumptions of staggered contract negotiations and contracts that fixed nominal prices and wage rates for extended periods. These early new Keynesian theories were based on the basic idea that, given fixed nominal wages, a monetary authority (central bank) can control the employment rate. Since wages are fixed at a nominal rate, the monetary authority can control the real wage (wage values adjusted for inflation) by changing the money supply and thus impact the employment rate.
By the 1980s new Keynesian economists became dissatisfied with these early nominal wage contract models since they predicted that real wages would be countercyclical (real wages would rise when the economy fell), while empirical evidence showed that real wages tended to be independent of economic cycles or even slightly procyclical. These contract models also did not make sense from a microeconomic standpoint since it was unclear why firms would use long-term contracts if they led to inefficiencies. Instead of looking for rigidities in the labor market, new Keynesians shifted their attention to the goods market and the sticky prices that resulted from "menu cost" models of price change. The term refers to the literal cost to a restaurant of printing new menus when it wants to change prices; however, economists also use it to refer to more general costs associated with changing prices, including the expense of evaluating whether to make the change. Since firms must spend money to change prices, they do not always adjust them to the point where markets clear, and this lack of price adjustments can explain why the economy may be in disequilibrium. Studies using data from the United States Consumer Price Index confirmed that prices do tend to be sticky. A good's price typically changes about every four to six months or, if sales are excluded, every eight to eleven months.
While some studies suggested that menu costs are too small to have much of an aggregate impact, Laurence Ball and David Romer (1990) showed that real rigidities could interact with nominal rigidities to create significant disequilibrium. Real rigidities occur whenever a firm is slow to adjust its real prices in response to a changing economic environment. For example, a firm can face real rigidities if it has market power or if its costs for inputs and wages are locked-in by a contract. Ball and Romer argued that real rigidities in the labor market keep a firm's costs high, which makes firms hesitant to cut prices and lose revenue. The expense created by real rigidities combined with the menu cost of changing prices makes it less likely that a firm will cut prices to a market-clearing level.
= Coordination failure =
Coordination failure is another potential explanation for recessions and unemployment. In recessions a factory can go idle even though there are people willing to work in it, and people willing to buy its production if they had jobs. In such a scenario, economic downturns appear to be the result of coordination failure: the invisible hand fails to coordinate the usual, optimal flow of production and consumption. Russell Cooper and Andrew John (1988) expressed a general form of coordination as models with multiple equilibria where agents could coordinate to improve (or at least not harm) each of their respective situations. Cooper and John based their work on earlier models including Peter Diamond's (1982) coconut model, which demonstrated a case of coordination failure involving search and matching theory. In Diamond's model producers are more likely to produce if they see others producing. The increase in possible trading partners increases the likelihood of a given producer finding someone to trade with. As in other cases of coordination failure, Diamond's model has multiple equilibria, and the welfare of one agent is dependent on the decisions of others. Diamond's model is an example of a "thick-market externality" that causes markets to function better when more people and firms participate in them. Other potential sources of coordination failure include self-fulfilling prophecies. If a firm anticipates a fall in demand, it might cut back on hiring. A lack of job vacancies might worry workers who then cut back on their consumption. This fall in demand meets the firm's expectations, but it is entirely due to the firm's own actions.
= Labor market failures =
New Keynesians offered explanations for the failure of the labor market to clear. In a Walrasian market, unemployed workers bid down wages until the demand for workers meets the supply; if markets were Walrasian, the ranks of the unemployed would be limited to workers transitioning between jobs and workers who choose not to work because wages are too low to attract them. New Keynesians developed several theories explaining why markets might leave willing workers unemployed. Of these theories, new Keynesians were especially associated with efficiency wages and the insider-outsider model used to explain long-term effects of previous unemployment, where short-term increases in unemployment become permanent and lead to higher levels of unemployment in the long run.
Insider-outsider model
Economists became interested in hysteresis when unemployment levels spiked with the 1979 oil shock and early 1980s recessions but did not return to the lower levels that had been considered the natural rate. Olivier Blanchard and Lawrence Summers (1986) explained hysteresis in unemployment with insider-outsider models, which were also proposed by Assar Lindbeck and Dennis Snower in a series of papers and then a book. Insiders, employees already working at a firm, are only concerned about their own welfare. They would rather keep their wages high than cut pay and expand employment. The unemployed, outsiders, do not have any voice in the wage bargaining process, so their interests are not represented. When unemployment increases, the number of outsiders increases as well. Even after the economy has recovered, outsiders continue to be disenfranchised from the bargaining process. The larger pool of outsiders created by periods of economic retraction can lead to persistently higher levels of unemployment. The presence of hysteresis in the labor market also raises the importance of monetary and fiscal policy. If temporary downturns in the economy can create long term increases in unemployment, stabilization policies do more than provide temporary relief; they prevent short term shocks from becoming long term increases in unemployment.
Efficiency wages
In efficiency wage models, workers are paid at levels that maximize productivity instead of clearing the market. For example, in developing countries, firms might pay more than a market rate to ensure their workers can afford enough nutrition to be productive. Firms might also pay higher wages to increase loyalty and morale, possibly leading to better productivity. Firms can also pay higher than market wages to forestall shirking. Shirking models were particularly influential. Carl Shapiro and Joseph Stiglitz (1984) created a model where employees tend to avoid work unless firms can monitor worker effort and threaten slacking employees with unemployment. If the economy is at full employment, a fired shirker simply moves to a new job. Individual firms pay their workers a premium over the market rate to ensure their workers would rather work and keep their current job instead of shirking and risk having to move to a new job. Since each firm pays more than market clearing wages, the aggregated labor market fails to clear. This creates a pool of unemployed laborers and adds to the expense of getting fired. Workers not only risk a lower wage, they risk being stuck in the pool of unemployed. Keeping wages above market clearing levels creates a serious disincentive to shirk that makes workers more efficient even though it leaves some willing workers unemployed.
New growth theory
Following research on the neoclassical growth model in the 1950s and 1960s, little work on economic growth occurred until 1985. Papers by Paul Romer were particularly influential in igniting the revival of growth research. Beginning in the mid-1980s and booming in the early 1990s many macroeconomists shifted their focus to the long-run and started "new growth" theories, including endogenous growth. Growth economists sought to explain empirical facts including the failure of sub-Saharan Africa to catch up in growth, the booming East Asian Tigers, and the slowdown in productivity growth in the United States prior to the technology boom of the 1990s. Convergence in growth rates had been predicted under the neoclassical growth model, and this apparent predictive failure inspired research into endogenous growth.
Three families of new growth models challenged neoclassical models. The first challenged the assumption of previous models that the economic benefits of capital would decrease over time. These early new growth models incorporated positive externalities to capital accumulation where one firm's investment in technology generates spillover benefits to other firms because knowledge spreads. The second focused on the role of innovation in growth. These models focused on the need to encourage innovation through patents and other incentives. A third set, referred to as the "neoclassical revival", expanded the definition of capital in exogenous growth theory to include human capital. This strain of research began with Mankiw, Romer, and Weil (1992), which showed that 78% of the cross-country variance in growth could be explained by a Solow model augmented with human capital.
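The augmented production function underlying this third strain of research is usually written as follows (standard textbook notation; the parameter letters are illustrative):

```latex
% Human-capital-augmented Solow model of the kind used by Mankiw, Romer, and
% Weil: output depends on physical capital K, human capital H, and effective
% labor AL, with decreasing returns to the accumulable factors combined.
\[
  Y_t = K_t^{\alpha}\, H_t^{\beta}\, (A_t L_t)^{1-\alpha-\beta},
  \qquad \alpha + \beta < 1.
\]
% Adding H lets the framework attribute much more of the cross-country variation
% in income and growth to factors that countries can accumulate.
```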
Endogenous growth theories implied that countries could experience rapid "catch-up" growth through an open society that encouraged the inflow of technology and ideas from other nations. Endogenous growth theory also suggested that governments should intervene to encourage investment in research and development because the private sector might not invest at optimal levels.
New synthesis
A "new synthesis" or "new neoclassical synthesis" emerged in the 1990s drawing ideas from both the new Keynesian and new classical schools. From the new classical school, it adapted RBC hypotheses, including rational expectations, and methods; from the new Keynesian school, it took nominal rigidities (price stickiness) and other market imperfections. The new synthesis implies that monetary policy can have a stabilizing effect on the economy, contrary to new classical theory. The new synthesis was adopted by academic economists and soon by policy makers, such as central bankers.
Under the synthesis, debates have become less ideological (concerning fundamental methodological questions) and more empirical. Woodford described the change:
It sometimes appears to outsiders that macroeconomists are deeply divided over issues of empirical methodology. There continue to be, and probably will always be, heated disagreements about the degree to which individual empirical claims are convincing. A variety of empirical methods are used, both for data characterization and for estimation of structural relations, and researchers differ in their taste for specific methods, often depending on their willingness to employ methods that involve more specific a priori assumptions. But the existence of such debates should not conceal the broad agreement on more basic issues of method. Both “calibrationists” and the practitioners of Bayesian estimation of DSGE models agree on the importance of doing “quantitative theory,” both accept the importance of the distinction between pure data characterization and the validation of structural models, and both have a similar understanding of the form of model that can properly be regarded as structural.
Woodford emphasized that there was now a stronger distinction between works of data characterization, which make no claims regarding their results' relationship to specific economic decisions, and structural models, where a model with a theoretical basis attempts to describe actual relationships and decisions being made by economic actors. The validation of structural models now requires that their specifications reflect "explicit decision problems faced by households or firms". Data characterization, Woodford says, proves useful in "establishing facts structural models should be expected to explain" but not as a tool of policy analysis. Rather it is structural models, explaining those facts in terms of real-life decisions by agents, that form the basis of policy analysis.
New synthesis theory developed RBC models called dynamic stochastic general equilibrium (DSGE) models, which avoid the Lucas critique. DSGE models formulate hypotheses about the behaviors and preferences of firms and households; numerical solutions of the resulting DSGE models are computed. These models also included a "stochastic" element created by shocks to the economy. In the original RBC models these shocks were limited to technological change, but more recent models have incorporated other real changes. Econometric analysis of DSGE models suggested that real factors sometimes affect the economy. A paper by Frank Smets and Rafael Wouters (2007) stated that monetary policy explained only a small part of the fluctuations in economic output. In new synthesis models, shocks can affect both demand and supply.
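The "stochastic" element typically enters these models as a persistent shock process; a common choice is an AR(1) for (log) technology, sketched below as a generic illustration rather than a reproduction of any particular model in the literature.

```python
import random

def simulate_technology_shocks(rho=0.95, sigma=0.01, periods=8, seed=0):
    """Simulate an AR(1) (log) technology process: z_t = rho * z_{t-1} + eps_t,
    with eps_t drawn from N(0, sigma^2). A high rho gives persistent deviations
    that decay slowly, the kind of shock RBC/DSGE models feed through to output."""
    random.seed(seed)
    z, path = 0.0, []
    for _ in range(periods):
        z = rho * z + random.gauss(0.0, sigma)
        path.append(round(z, 4))
    return path

print(simulate_technology_shocks())
```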
More recent developments in new synthesis modeling have included heterogeneous agent models, used in monetary policy optimization. These models examine how distinct groups of consumers with different savings behavior within a population affect the transmission of monetary policy through an economy.
2008 financial crisis, Great Recession, and the evolution of consensus
The 2007–2008 financial crisis and subsequent Great Recession challenged the short-term macroeconomics of the time. Few economists predicted the crisis, and, even afterwards, there was great disagreement on how to address it. The new synthesis formed during the Great Moderation and had not been tested in a severe economic environment. Many economists agree that the crisis stemmed from an economic bubble, but neither of the major macroeconomic schools within the synthesis had paid much attention to finance or a theory of asset bubbles. The failures of macroeconomic theory at the time to explain the crisis spurred macroeconomists to re-evaluate their thinking. Commentary ridiculed the mainstream and proposed a major reassessment.
Particular criticism during the crisis was directed at DSGE models, which were developed prior to and during the new synthesis. Robert Solow testified before the U.S. Congress that DSGE modeling "has nothing useful to say about anti-recession policy because it has built into its essentially implausible assumptions the 'conclusion' that there is nothing for macroeconomic policy to do." Solow also criticized DSGE models for frequently assuming that a single, "representative agent" can represent the complex interaction of the many diverse agents that make up the real world. Robert Gordon criticized much of macroeconomics after 1978. Gordon called for a renewal of disequilibrium theorizing and disequilibrium modeling. He disparaged both new classical and new Keynesian economists who assumed that markets clear; he called for a renewal of economic models that could include both market-clearing and sticky-priced goods, such as oil and housing respectively.
The crisis of confidence in DSGE models did not dismantle the deeper consensus that characterizes the new synthesis, and models which could explain the new data continued development. Areas that had seen increased popular and political attention, such as income inequality, received greater focus, as did models which incorporated significant heterogeneity (as opposed to earlier DSGE models). Whilst criticizing DSGE models, Ricardo J. Caballero argued that work in finance showed progress and suggested that modern macroeconomics needed to be re-centered but not scrapped in the wake of the financial crisis. In 2010, Federal Reserve Bank of Minneapolis president Narayana Kocherlakota acknowledged that DSGE models were "not very useful" for analyzing the financial crisis of 2007–2010, but argued that the applicability of these models was "improving" and claimed that there was a growing consensus among macroeconomists that DSGE models need to incorporate both "price stickiness and financial market frictions." Despite his criticism of DSGE modeling, he stated that modern models are useful:
In the early 2000s, ...[the] problem of fit disappeared for modern macro models with sticky prices. Using novel Bayesian estimation methods, Frank Smets and Raf Wouters demonstrated that a sufficiently rich New Keynesian model could fit European data well. Their finding, along with similar work by other economists, has led to widespread adoption of New Keynesian models for policy analysis and forecasting by central banks around the world.
University of Minnesota professor of economics V.V. Chari said in 2010 that the most advanced DSGE models allowed for significant heterogeneity in behavior and decisions, from factors such as age, prior experiences and available information. Alongside such improvements in DSGE modeling, work has also included the development of heterogeneous-agent models of more specific aspects of the economy, such as monetary policy transmission.
Environmental issues
From the 21st century onwards, the concept of ecosystem services (the benefits to humans provided by the natural environment and by healthy ecosystems) has been more widely studied in economics. Climate change has also become more widely acknowledged as a major issue in economics, sparking debates about sustainable development. Climate change has likewise become a factor in the policies of institutions such as the European Central Bank.
The field of ecological economics also became more popular in the 21st century. In its macroeconomic models, the economic system is a subsystem of the environment. The circular flow of income diagram is replaced in ecological economics by a more complex flow diagram reflecting the input of solar energy, which sustains the natural inputs and environmental services that are then used as units of production. Once consumed, natural inputs pass out of the economy as pollution and waste. The potential of an environment to provide services and materials is referred to as the environment's "source function", and this function is depleted as resources are consumed or pollution contaminates the resources. The "sink function" describes an environment's ability to absorb and render harmless waste and pollution: when waste output exceeds the limit of the sink function, long-term damage occurs.
Another example of a model in ecological economics is the doughnut model from economist Kate Raworth. This macroeconomic model incorporates planetary boundaries, such as climate change. These macroeconomic models from ecological economics, although increasingly popular, are not fully accepted by mainstream economic thinking.
Heterodox theories
Heterodox economists adhere to theories sufficiently outside the mainstream to be marginalized and treated as irrelevant by the establishment. Initially, heterodox economists, including Joan Robinson, worked alongside mainstream economists, but heterodox thinkers isolated themselves and formed insular groups in the late 1960s and 1970s. Present-day heterodox economists often publish in their own journals rather than those of the mainstream and eschew formal modeling in favor of more abstract theoretical work.
According to The Economist, the 2008 financial crisis and subsequent recession highlighted limitations of the macroeconomic theories, models, and econometrics of the time. The popular press during the period discussed post-Keynesian economics and Austrian economics, two heterodox traditions that have little influence on mainstream economics.
= Post-Keynesian economics =
While neo-Keynesians integrated Keynes's ideas with neoclassical theory, post-Keynesians went in other directions. Post-Keynesians opposed the neoclassical synthesis and shared a fundamentalist interpretation of Keynes that sought to develop economic theories without classical elements. The core of post-Keynesian belief is the rejection of three axioms that are central to classical and mainstream Keynesian views: the neutrality of money, gross substitution, and the ergodic axiom. Post-Keynesians not only reject the neutrality of money in the short run, they also see money as an important factor in the long run, a view other Keynesians dropped in the 1970s. Gross substitution implies that goods are interchangeable: relative price changes cause people to shift their consumption in proportion to the change. The ergodic axiom asserts that the future of the economy can be predicted based on past and present market conditions. Without the ergodic assumption, agents are unable to form rational expectations, undermining new classical theory. In a non-ergodic economy, predictions are very hard to make and decision-making is hampered by uncertainty. Partly because of uncertainty, post-Keynesians take a different stance on sticky prices and wages than new Keynesians. They do not see nominal rigidities as an explanation for the failure of markets to clear. They instead think sticky prices and long-term contracts anchor expectations and alleviate uncertainty that hinders efficient markets. Post-Keynesian economic policies emphasize the need to reduce uncertainty in the economy, including through safety nets and price stability. Hyman Minsky applied post-Keynesian notions of uncertainty and instability to a theory of financial crisis where investors increasingly take on debt until their returns can no longer pay the interest on leveraged assets, resulting in a financial crisis. The financial crisis of 2007–2008 brought mainstream attention to Minsky's work.
= Austrian business cycle theory =
The Austrian School of economics began with Carl Menger's 1871 Principles of Economics. Menger's followers formed a distinct group of economists until around World War II, when the distinction between Austrian economics and other schools of thought had largely broken down. The Austrian tradition survived as a distinct school, however, through the works of Ludwig von Mises and Friedrich Hayek. Present-day Austrians are distinguished by their interest in earlier Austrian works and abstention from standard empirical methodology including econometrics. Austrians also focus on market processes instead of equilibrium. Mainstream economists are generally critical of its methodology.
Hayek created the Austrian business cycle theory, which synthesizes Menger's capital theory and Mises's theory of money and credit. The theory proposes a model of inter-temporal investment in which production plans precede the manufacture of the finished product. The producers revise production plans to adapt to changes in consumer preferences. Producers respond to "derived demand," which is estimated demand for the future, instead of current demand. If consumers reduce their spending, producers believe that consumers are saving for additional spending later, so that production remains constant. Combined with a market of loanable funds (which relates savings and investment through the interest rate), this theory of capital production leads to a model of the macroeconomy where markets reflect inter-temporal preferences. Hayek's model suggests that an economic bubble begins when cheap credit initiates a boom where resources are misallocated, so that early stages of production receive more resources than they should and overproduction begins; the later stages of capital are not funded for maintenance to prevent depreciation. Overproduction in the early stages cannot be processed by poorly maintained later stage capital. The boom becomes a bust when a lack of finished goods leads to "forced saving" since fewer finished goods can be produced for sale.
Further reading
= Articles =
de Vroey, Michel (2004). "The History of Macroeconomics Viewed against the Background of the Marshall-Walras Divide". History of Political Economy. 36: 57–91. doi:10.1215/00182702-36-suppl_1-57. hdl:2078.1/5852. S2CID 12513044.
= Books =
Handbooks in Economics
Taylor, John B.; Woodford, Michael, eds. (1999). Handbook of macroeconomics. Handbooks in Economics. Vol. 1–3. North-Holland. ISBN 978-0-444-50156-1.
Handbook of Monetary Economics. Elsevier.
Friedman, Benjamin M.; Hahn, Frank H., eds. (1990). Vols. 1–2.
Friedman, Benjamin M.; Woodford, Michael, eds. (2010). Vols. 3A–3B.
Leijonhufvud, Axel (1981). Information and coordination : essays in macroeconomic theory. New York: Oxford University Press. ISBN 978-0-19-502815-7.
Woodford, Michael (2003). Interest and prices: Foundations of a theory of monetary policy. Princeton, New Jersey: Princeton University Press. ISBN 978-0-691-01049-6.
External links
Articles at IDEAS (Internet Documents in Economics Access Service) classified as "History of Economic Thought since 1925: Macroeconomics"
Database of macroeconomic models
= Podcasts and videos =
Related Nobel Prize lecture videos and other material
Thomas Sargent and Chris Sims (2011) "Empirical research on cause and effect in the macroeconomy"
Peter Diamond, Dale Mortensen, and Christopher Pissarides (2010) "Analysis of markets with search frictions"
Edmund Phelps (2006) "Analysis of intertemporal tradeoffs in macroeconomic policy"
Finn E. Kydland and Edward C. Prescott (2004) "Dynamic macroeconomics: the time consistency of economic policy and the driving forces behind business cycles"
George Akerlof, Michael Spence, and Joseph Stiglitz (2001) "Analyses of markets with asymmetric information".
Institute for New Economic Thinking Conference Proceedings videos