Monday, August 31, 2015

Global Value Chains and Rethinking Production and Trade

Back in the 1920s and 1930s, manufacturing often meant enormous factories that tried to bring as many activities as possible under one roof--what economists call "vertical integration." However, the trend in recent decades has been toward "global value chains," in which production is not only divided up among a number of specialized facilities, but in addition those facilities are often located in different countries. João Amador and Filippo di Mauro have edited The Age of Global Value Chains: Maps and Policy Issues, a VoxEU.org ebook from the Centre for Economic Policy Research. The volume includes an introduction, 13 essays, and an appendix about data.

One classic example of a vertically integrated plant was the River Rouge plant run by Ford Motor Company in the 1920s and 1930s. Here's a description:

Located a few miles south of Detroit at the confluence of the Rouge and Detroit Rivers, the original Rouge complex was a mile-and-a-half wide and more than a mile long. The multiplex of 93 buildings totaled 15,767,708 square feet of floor area crisscrossed by 120 miles of conveyors.
There were ore docks, steel furnaces, coke ovens, rolling mills, glass furnaces and plate-glass rollers. Buildings included a tire-making plant, stamping plant, engine casting plant, frame and assembly plant, transmission plant, radiator plant, tool and die plant, and, at one time, even a paper mill. A massive power plant produced enough electricity to light a city the size of nearby Detroit, and a soybean conversion plant turned soybeans into plastic auto parts.
The Rouge had its own railroad with 100 miles of track and 16 locomotives. A scheduled bus network and 15 miles of paved roads kept everything and everyone on the move.
It was a city without residents. At its peak in the 1930s, more than 100,000 people worked at the Rouge. To accommodate them required a multi-station fire department, a modern police force, a fully staffed hospital and a maintenance crew 5,000 strong. One new car rolled off the line every 49 seconds. Each day, workers smelted more than 1,500 tons of iron and made 500 tons of glass, and every month 3,500 mop heads had to be replaced to keep the complex clean.
In the modern economy, global value chains are coming to define what international trade is all about. From the introduction by Amador and di Mauro:

Until the late 19th century, the production of goods was very much a local affair, with inputs, factors of productions and markets being at only a marginal distance from one another. It was only after the ‘steam revolution’ that railroads and steamships started to be used for the transportation of goods, making the sale of excess production to other geographical areas feasible and profitable thanks to the exploitation of economies of scale. Baldwin (2006) refers to this as the first ‘unbundling’, i.e. the process that enabled production to be separated from consumption. ...

This transport revolution, while making trade cheaper and at the same time favouring large-scale production, led to the local clustering of production in factories and industrial areas. The geographical proximity of various stages of production made it easier to coordinate increasingly complex production processes and to minimise the associated coordination costs. Due to coordination costs, proximity was very important up until the mid-1980s. It was only then that the information and communication technology (ICT) revolution made it possible to reduce those costs by enabling complexity to be coordinated at a distance. Thanks to the sharp progress in ICT, not only could
consumption be separated from production, but production could also be broken up. The possibility of relocating the different stages of production theoretically enabled different tasks within a production process to be performed by geographically dispersed production units. This was termed the ‘second unbundling’ in international trade, leading to the sharing of production between developed and developing economies from the mid-1980s onwards. ...
The relocation of these stages of manufacturing to developing countries fostered high growth rates in emerging markets and was further enhanced by domestic policies aimed at attracting foreign capital. As a consequence, the ‘second unbundling’ reversed the previous industrialisation/non-industrialisation pattern prevalent in developed and developing countries. This change of fortunes represents one of the biggest economic transformations of the last decades and it reshaped, and will continue to shape, the balance of power in both international and economic relations. ...
The importance of GVCs has been steadily increasing in the last decades and, as reported in UNCTAD’s 2013 World Investment Report, about 60% of global trade consists of trade in intermediate goods and services, which are then incorporated at different stages of production.
The shift to global value chains raises an array of questions, from basic conceptual and measurement issues to domestic and international policy. At the most basic level, global value chains challenge the standard way of even talking about international trade as what one country imports from another. After all, some substantial portion of the value of what, say, the US imports from China was actually first imported into China, used to produce output, and then exported out of China again. To put it another way, in a world with global value chains, a country's exports don't measure what was actually produced in that country.

In studies of global value chains, a standard measure is the proportion of the value of a country's exports that actually comes from imported inputs. Here's a table showing how the share of foreign value-added in exports has generally been rising since 2000, from the essay by João Amador, Rita Cappariello, and Robert Stehrer:

Measuring trade in terms of value-added between countries, not in terms of the value of what is shipped, can alter one's view of trade flows. For example, the essay by Arne J. Nagengast and Robert Stehrer provides a figure showing differences in trade flows between certain pairs of countries using the standard import and export statistics, and using statistics on value-added in exports and imports. As one example, the US trade deficit with China looks smaller and the trade deficit with Japan looks larger--because inputs from Japan are being exported to China, used in production, and then shipped to the US. As the importance of global value chains rises, these divergences become bigger, too.
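To make the distinction concrete, here's a minimal numerical sketch in Python. All the dollar figures are invented for illustration (they are not from the ebook); the point is just the bookkeeping when Japanese components pass through Chinese assembly on their way to the US.

```python
# Toy three-country example. All numbers are invented for illustration:
# Japan sells $40 of components to China; China adds $60 of its own value
# and exports a $100 finished good to the US; the US exports nothing here.

# Conventional (gross) statistics attribute the full $100 crossing the
# border to China.
us_gross_deficit = {"China": 100, "Japan": 0}

# Value-added statistics trace each dollar of the finished good back to
# the country where it was produced.
us_value_added_deficit = {"China": 60, "Japan": 40}

for partner in ("China", "Japan"):
    print(f"US deficit with {partner}: gross ${us_gross_deficit[partner]}, "
          f"value-added ${us_value_added_deficit[partner]}")

# The measured US deficit with China shrinks (100 -> 60) and the deficit
# with Japan grows (0 -> 40) once the Japanese content of Chinese exports
# is traced back to Japan.

# The foreign value-added share of China's exports in this toy example:
fva_share = 40 / 100
print(f"Foreign value-added share of China's exports: {fva_share:.0%}")
```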




Another issue is whether global value chains are truly global, or whether instead there are groupings of international but regional value chains--sometimes labelled Factory Asia, Factory North America, and Factory Europe. In their essay, Bart Los, Marcel Timmer and Gaaitzen de Vries argue that while the regional dimension has been strong in the past, "‘Factory World’ is emerging." They write: "Our analysis shows that value chains have become considerably more global in nature since 2003. Increasing shares of the value of a product are added outside the region to which the country-of-completion belongs. Regional blocs like ‘Factory Europe’ are still important, but the construction of ‘Factory World’ is progressing rapidly."

The rise of global value chains also has implications for countries looking for their niche in the global economy. The old-style approach was to focus on your domestic chain of production and what your economy produced; the new-style approach is to focus on how your economy might fit into a global value chain. The new emphasis means that connections to information and communication technology become even more important, because they are essential to managing far-flung production chains. Financial and legal institutions also matter more, because these sprawling production chains require moving money and resolving disputes in expeditious ways.

For those who would like even more of an entree into the academic research on global value chains than is provided in this ebook, a useful starting point is a two-paper symposium on this topic in the Spring 2014 issue of the Journal of Economic Perspectives:
(Full disclosure: I've worked as Managing Editor of JEP since the first issue of the journal in 1987.)


Friday, August 28, 2015

Falling Labor Share: Measurement Issues and Candidate Explanations

It is a remarkable fact that the labor share of income in the United States hovered in the range of 60-65% of total income for 50 years--but has declined since 2000 and seems to be still falling. Roc Armenter explores how this figure is measured and some possible explanations for the recent change in "A Bit of a Miracle No More: The Decline of the Labor Share," published in the Business Review from the Philadelphia Federal Reserve (Third Quarter 2015, pp. 1-7).

Armenter reminds us that in the construction of income statistics, "Every dollar of income earned by U.S. households can be classified as either labor earnings — wages and other forms of compensation — or capital earnings — interest or dividend payments and rent." Here's the basic pattern of the falling share of labor income:

One possible explanation for this change involves alterations in how the statistics for labor and capital income have been calculated over time. The single biggest factor here appears to be a change in how the income of self-employed workers is treated. The official statistics divide up their income as if the person were working a certain number of hours for pay, which is counted as "labor income," while the rest of their income is "capital income" due to ownership of capital in the form of their business. But the way in which the Bureau of Labor Statistics makes this division changed back in 2001. Armenter explains:
Indeed, until 2001, the BLS’s methodology assigned most of proprietor’s income to the labor share, a bit more than four-fifths of it. Since then, less than half of proprietor’s income has been classified as labor income. ... [A]t least one-third and possibly closer to half of the drop in the headline labor share is due to how the BLS treats proprietor’s income.
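To see how much a reclassification like this can move the headline number, here's a back-of-the-envelope sketch. Only the four-fifths and roughly one-half splits come from Armenter's description; the dollar amounts are invented round numbers chosen to land the labor share in its historical 60-65% range.

```python
# Illustrative national-income aggregates, in trillions of dollars.
# Only the 4/5 and roughly 1/2 splits come from Armenter's description;
# the dollar amounts are invented for illustration.
employee_compensation = 7.0   # unambiguously labor income
proprietors_income    = 1.2   # must be split between labor and capital
other_capital_income  = 4.3   # interest, dividends, rent, and so on
total_income = employee_compensation + proprietors_income + other_capital_income

def labor_share(labor_fraction_of_proprietors_income):
    labor = employee_compensation + labor_fraction_of_proprietors_income * proprietors_income
    return labor / total_income

old_method = labor_share(4 / 5)   # pre-2001 BLS treatment
new_method = labor_share(1 / 2)   # rough post-2001 treatment

print(f"Labor share, old treatment: {old_method:.1%}")
print(f"Labor share, new treatment: {new_method:.1%}")
print(f"Change from reclassification alone: {(old_method - new_method) * 100:.1f} percentage points")
```

With these invented magnitudes, the reclassification by itself knocks roughly three percentage points off the measured labor share, which is the order of magnitude Armenter has in mind when he attributes one-third to one-half of the headline drop to the treatment of proprietors' income.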
Before thinking about why the labor share has fallen, it's worth pausing over the remarkable fact that it didn't change for such a long time. After all, the period from 1950 to 2000 saw a rise in the share of workers in service-industry jobs, along with enormous growth in industries like health care and financial services. Surely all of this should have been expected to alter the labor share of income in one way or another?

Back in 1939, John Maynard Keynes wrote an article (“Relative Movements of Real Wages and Output,” Economic Journal, 49: 34–51) pointing out that the division between labor and capital income appeared to have been the same for the preceding 20 years in the data available to him. He pointed out five separate factors that should have been affecting the shares of labor and capital over time, and noted that the changes in these factors were apparently almost exactly offsetting each other. He characterized these closely offsetting effects as "a bit of a miracle"--a phrase that Armenter uses in the title of his article. The minor miracle of a roughly stable labor share of income for several decades after 1950 arises from its own array of offsetting changes, in the mix of industries and in the labor share within industries, as Armenter explains (footnotes omitted):
The reader would not be surprised to learn that different sectors use labor and capital in different proportions. In 1950, the manufacturing sector averaged a labor share of 62 percent, with some subsectors having even higher labor shares, such as durable goods manufacturing, with a labor share of 77 percent. Services instead relied more on capital and thus had lower labor shares: an average of 48 percent. Thus, from 1950 to 1987, the sector with a high labor share (manufacturing) was cut in half, while the sector with a low labor share (services) doubled. The aggregate labor share is, naturally, the weighted average across these sectors. Therefore, we would have expected the aggregate labor share to fall. But as we already know, it did not. The reason is that, coincidentally with the shift from manufacturing to services, the labor share of the service sector rose sharply, from 48 percent in 1950 to 56 percent in 1987. Education and health services went from labor shares around 50 percent to the highest values in the whole economy, close to 84 percent. In manufacturing, the labor share was substantially more stable, increasing by less than 2 percentage points over the period. And this is the “bit of a miracle” — that the forces affecting the labor share across and within sectors just happened to cancel each other out over a period of almost half a century.
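The arithmetic behind the "miracle" is just a weighted average. The sketch below uses the sector labor shares quoted by Armenter, but the sector weights (and the "other" category) are stylized assumptions of mine, meant only to show how a shift toward a low-labor-share sector can be roughly offset by a rising labor share within that sector.

```python
# Sector labor shares are those quoted by Armenter; the sector weights
# (shares of total income) and the "other" category are stylized
# assumptions for illustration only.

def aggregate_labor_share(weights, labor_shares):
    """Aggregate labor share as the income-weighted average of sector shares."""
    return sum(weights[s] * labor_shares[s] for s in weights)

# 1950: a large manufacturing sector with a high labor share.
weights_1950      = {"manufacturing": 0.40, "services": 0.40, "other": 0.20}
labor_shares_1950 = {"manufacturing": 0.62, "services": 0.48, "other": 0.55}

# 1987: the manufacturing weight roughly halved and the services weight
# roughly doubled, but the labor share within services rose from 48% to 56%
# (and manufacturing's crept up by about a point).
weights_1987      = {"manufacturing": 0.20, "services": 0.60, "other": 0.20}
labor_shares_1987 = {"manufacturing": 0.63, "services": 0.56, "other": 0.55}

print(f"1950 aggregate labor share: {aggregate_labor_share(weights_1950, labor_shares_1950):.1%}")
print(f"1987 aggregate labor share: {aggregate_labor_share(weights_1987, labor_shares_1987):.1%}")
# Despite the large compositional shift toward the historically low-labor-share
# sector, the aggregate moves by only about two percentage points, because the
# labor share within services rose at the same time.
```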
So what changed? The gradual shift from manufacturing to service jobs continued. The labor share within the service sector has continued increasing. The big change is that the labor share of manufacturing has fallen dramatically. More specifically, what seems to have happened is that labor productivity in the manufacturing sector has kept rising, but wages in manufacturing have not kept pace. Here's Armenter:
We readily find out which part of the economy is behind the decline of the labor share once we look at the change in the labor share within manufacturing, which dropped almost 10 percentage points. Virtually all the major manufacturing subsectors saw their labor shares fall; for nondurable goods manufacturing it dropped from 62 percent to 40 percent. ... What ended the “miracle” was the precipitous decline in the labor share within manufacturing.

This divergence between manufacturing productivity and wages started back in the 1980s, which seems a little early to match the timeline of the decline in labor share starting in 2000. However, one can cobble together a story in which the decline in labor share started in the late 1980s, was interrupted for a time by the white-hot and unsustainable labor market conditions during the dot-com boom of the late 1990s, and then continued on its path of decline.

What caused the decline in labor share? Armenter runs through some possible explanations, while emphasizing that none of them are complete. For example, a "capital deepening" explanation holds that there is more capital in US manufacturing, which could explain a lower labor share in manufacturing--but it doesn't explain why wages in manufacturing stopped keeping up with productivity increases. A globalization explanation might help to explain this shift if the US was tending to import goods in industries that had high labor share and to export goods in industries with a lower labor share. But this factor doesn't seem able to explain the observed shift, as Armenter explains: "The main challenge to the hypothesis is that U.S. exports and imports are very similar in their factor composition. That is, were trade driving down the labor share, we would observe the U.S. importing goods that use a lot of labor and exporting goods that use a lot of capital. Instead, most international trade involves exchanging goods that are very similar, such as cars."

An explanation not explicitly considered by Armenter, although it is in the spirit of the other explanations, comes from the work of Susan Houseman. She argues that most of what looks like productivity growth in manufacturing isn't about workers actually producing more, but is because computers have ever-greater capabilities--which the government statisticians measure as productivity growth. She also argues that there is a shift within the manufacturing sector toward importing less expensive inputs to production, which looks like a gain in productivity (that is, fewer inputs needed to produce a given level of output), but is really just cheaper imports of production inputs.

In the spirit of half-full, half-empty analysis, I suppose the positive side of this analysis is that the drop in the share of US labor income is concentrated in just one part of the economy, manufacturing, which employs less than 10% of the US workforce (12.3 million jobs in manufacturing, compared with 148 million jobs in the entire US economy). Of course, the half-empty side is that whatever the reason, the minor miracle of a fairly stable labor share has in fact ended, and it has ended in a way that tends to disadvantage those who receive their income through labor.

For some earlier posts on the falling share of labor income, both from a US and an international point of view, see:



Thursday, August 27, 2015

States as the Laboratories of Democracy: An Historical Note

Perhaps the most famous metaphor defending the virtues of US federalism is that states can act as laboratories of democracy: that is,  states can enact a range of policies, and can then learn from the experiences of other states. The phrase was coined by Justice Louis Brandeis in the 1932 Supreme Court case of New State Ice Co. v. Liebmann (285 U.S. 262). Brandeis wrote: "It is one of the happy incidents of the federal system that a single courageous state may, if its citizens choose, serve as a laboratory; and try novel social and economic experiments without risk to the rest of the country."

But there's a hearty dash of irony here. Brandeis, however admirable his sentiments about the states as laboratories of democracy, was writing in dissent. In the specific case, the state of Oklahoma had passed a law that required approval from a state-level Corporation Commission for anyone who wanted to start a firm that would make, distribute, or sell ice. The law automatically granted all existing ice-related firms the approval of the Corporation Commission to continue operating; it was only new firms that were required to appear before the Corporation Commission and to argue, against the incumbent firms, that they should be allowed to begin operations. Thus, the question before the Supreme Court was whether the states, as laboratories of democracy, might ban entry into a certain market.

Justice George Sutherland, writing for the majority, argued that the Oklahoma law was unconstitutional under the "due process" clause of the 14th Amendment. Here's a taste of Sutherland's argument, which holds that state-level experimentation could happen in many ways, but not in a way that stopped people from engaging in lawful business. Sutherland wrote:
"Plainly, a regulation which has the effect of denying or unreasonably curtailing the common right to engage in a lawful private business, such as that under review, cannot be upheld consistent with the Fourteenth Amendment. ...

"Stated succinctly, a private corporation here seeks to prevent a competitor from entering the business of making and selling ice. It claims to be endowed with state authority to achieve this exclusion. There is no question now before us of any regulation by the state to protect the consuming public either with respect to conditions of manufacture and distribution or to insure purity of product or to prevent extortion. The control here asserted does not protect against monopoly, but tends to foster it. The aim is not to encourage competition, but to prevent it; not to regulate the business, but to preclude persons from engaging in it. There is no difference in principle between this case and the attempt of the dairyman under state authority to prevent another from keeping cows and selling milk on the ground that there are enough dairymen in the business; or to prevent a shoemaker from making or selling shoes because shoemakers already in that occupation can make and sell all the shoes that are needed. We are not able to see anything peculiar in the business here in question which distinguishes it from ordinary manufacture and production. ... It is not the case of a natural monopoly, or of an enterprise in its nature dependent upon the grant of public privileges. The particular requirement before us was evidently not imposed to prevent a practical monopoly of the business, since its tendency is quite to the contrary. Nor is it a case of the protection of natural resources. There is nothing in the product that we can perceive on which to rest a distinction, in respect of this attempted control, from other products in common use which enter into free competition, subject, of course, to reasonable regulations prescribed for the protection of the public and applied with appropriate impartiality.
"And it is plain that unreasonable or arbitrary interference or restrictions cannot be saved from the condemnation of that amendment merely by calling them experimental. It is not necessary to challenge the authority of the states to indulge in experimental legislation; but it would be strange and unwarranted doctrine to hold that they may do so by enactments which transcend the limitations imposed upon them by the Federal Constitution."
In his dissent, Brandeis made several interlocking arguments. He argued that the ice business might have large economies of scale, in which case a few large firms could produce more cheaply than many small firms. In this setting, he argued that competition in the ice business could easily lead to a downward spiral of bankruptcies, and cited the past experience of railroads as a situation where capacity was overbuilt and mass bankruptcies occurred. He also argued that ice could be viewed as a "necessity of life" in Oklahoma. Here's a taste of the Brandeis argument (footnotes omitted):
In Oklahoma a regular supply of ice may reasonably be considered a necessary of life, comparable to that of water, gas, and electricity. The climate, which heightens the need of ice for comfortable and wholesome living, precludes resort to the natural product. There, as elsewhere, the development of the manufactured ice industry in recent years has been attended by deep-seated alterations in the economic structure and by radical changes in habits of popular thought and living. Ice has come to be regarded as a household necessity, indispensable to the preservation of food and so to economical household management and the maintenance of health. Its commercial uses are extensive. ... We cannot say that the Legislature of Oklahoma acted arbitrarily in declaring that ice is an article of primary necessity, in industry and agriculture as well as in the household, partaking of the fundamental character of electricity, gas, water, transportation, and communication. ...
The business of supplying ice is not only a necessity, like that of supplying food or clothing or shelter, but the Legislature could also consider that it is one which lends itself peculiarly to monopoly. Characteristically the business is conducted in local plants with a market narrowly limited in area, and this for the reason that ice manufactured at a distance cannot effectively compete with a plant on the ground. In small towns and rural communities the duplication of plants, and in larger communities the duplication of delivery service, is wasteful and ultimately burdensome to consumers. At the same time the relative ease and cheapness with which an ice plant may be constructed exposes the industry to destructive and frequently ruinous competition. Competition in the industry tends to be destructive because ice plants have a determinate capacity, and inflexible fixed charges and operating costs, and because in a market of limited area the volume of sales is not readily expanded. Thus, the erection of a new plant in a locality already adequately served often causes managers to go to extremes in cutting prices in order to secure business. Trade journals and reports of association meetings of ice manufacturers bear ample witness to the hostility of the industry to such competition, and to its unremitting efforts, through trade associations, informal agreements, combination of delivery systems, and in particular through the consolidation of plants, to protect markets and prices against competition of any character.
I'm not confident that Brandeis's economics is coherent. If it's true that large established firms in the ice industry have a huge cost advantage from economies of scale, then presumably they shouldn't have much to fear from smaller-scale competitors. In such a case, there might be an argument for regulating the price of ice as a monopoly. But smaller-scale competitors seeking to enter the industry would immediately face losses and have little chance of gaining market share. Industries looking for protection always claim that hobbling the competition would benefit consumers, and such claims were especially popular during the Great Depression, but there is ample reason to be skeptical of such self-interested claims.

But more broadly, an open society can be viewed as having a number of laboratories. States can be one of the laboratories of democracy, along with cities, and the opportunities for experimentation within federal laws. Private firms and new entrants are the laboratories for methods of production, workplace rules and compensation, and new technologies. Public schools can be viewed as laboratories of education, while colleges and universities are also laboratories of education, as well as research. Media and publications can be viewed as laboratories for shaping social opinions and decisions. Social structures like families, communities, and churches can be viewed as a series of laboratories for other changes in social relations. In a constitutional democracy, government should face some limits when it seeks to shut down society's other laboratories. 

Tuesday, August 25, 2015

John Kenneth Galbraith on Writing, Inspiration, and Simplicity

John Kenneth Galbraith (1908-2006) was trained as an economist, but in books like The Affluent Society (1958) and The New Industrial State (1967), he found his metier as a social critic. In these books and voluminous other writings, Galbraith didn't propose well-articulated economic theories or carry out systematic empirical tests, but instead offered big-picture perspectives on the economy and society of his time. His policy advice was grindingly predictable: big and bigger doses of progressive liberalism, what he sometimes called "new socialism." 

For a sense of how mainstream and Democratic-leaning economists of the time dismissed Galbraith's work, a classic example is the scathing-and-smiling review of The New Industrial State by Robert Solow in the Fall 1967 issue of The Public Interest. Galbraith's response appears in the same issue. Connoisseurs of academic blood sports will enjoy the exchange.

Here, I come not to quarrel with Galbraith's economics, but to praise him as one of the finest writers on economics and social science topics it has ever been my pleasure to read. I take as my text his essay on "Writing, Typing, and Economics," which appeared in the March 1978 issue of The Atlantic and which I recently rediscovered. Here are some highlights:

"All writers know that on some golden mornings they are touched by the wand — are on intimate terms with poetry and cosmic truth. I have experienced those moments myself. Their lesson is simple: It's a total illusion. And the danger in the illusion is that you will wait for those moments. Such is the horror of having to face the typewriter that you will spend all your time waiting. I am persuaded that most writers, like most shoemakers, are about as good one day as the next (a point which Trollope made), hangovers apart. The difference is the result of euphoria, alcohol, or imagination. The meaning is that one had better go to his or her typewriter every morning and stay there regardless of the seeming result. It will be much the same. ..."
"My advice to those eager students in California would be, "Do not wait for the golden moment. It may well be worse." I would also warn against the flocking tendency of writers and its use as a cover for idleness. It helps greatly in the avoidance of work to be in the company of others who are also waiting for the golden moment. The best place to write is by yourself, because writing becomes an escape from the terrible boredom of your own personality. It's the reason that for years I've favored Switzerland, where I look at the telephone and yearn to hear it ring. ..."
"There may be inspired writers for whom the first draft is just right. But anyone who is not certifiably a Milton had better assume that the first draft is a very primitive thing. The reason is simple: Writing is difficult work. Ralph Paine, who managed Fortune in my time, used to say that anyone who said writing was easy was either a bad writer or an unregenerate liar. Thinking, as Voltaire avowed, is also a very tedious thing which men—or women—will do anything to avoid. So all first drafts are deeply flawed by the need to combine composition with thought. Each later draft is less demanding in this regard. Hence the writing can be better. There does come a time when revision is for the sake of change—when one has become so bored with the words that anything that is different looks better. But even then it may be better. ..." 
"Next, I would want to tell my students of a point strongly pressed, if my memory serves, by Shaw. He once said that as he grew older, he became less and less interested in theory, more and more interested in information. The temptation in writing is just the reverse. Nothing is so hard to come by as a new and interesting fact. Nothing is so easy on the feet as a generalization. I now pick up magazines and leaf through them looking for articles that are rich with facts; I do not care much what they are. Richly evocative and deeply percipient theory I avoid. It leaves me cold unless I am the author of it. ..." 
"In the case of economics there are no important propositions that cannot be stated in plain language. Qualifications and refinements are numerous and of great technical complexity. These are important for separating the good students from the dolts. But in economics the refinements rarely, if ever, modify the essential and practical point. The writer who seeks to be intelligible needs to be right; he must be challenged if his argument leads to an erroneous conclusion and especially if it leads to the wrong action. But he can safely dismiss the charge that he has made the subject too easy. The truth is not difficult. Complexity and obscurity have professional value—they are the academic equivalents of apprenticeship rules in the building trades. They exclude the outsiders, keep down the competition, preserve the image of a privileged or priestly class. The man who makes things clear is a scab. He is criticized less for his clarity than for his treachery.
"Additionally, and especially in the social sciences, much unclear writing is based on unclear or incomplete thought. It is possible with safety to be technically obscure about something you haven't thought out. It is impossible to be wholly clear on something you do not understand. Clarity thus exposes flaws in the thought. The person who undertakes to make difficult matters clear is infringing on the sovereign right of numerous economists, sociologists, and political scientists to make bad writing the disguise for sloppy, imprecise, or incomplete thought. One can understand the resulting anger." 

Monday, August 24, 2015

The Human Breast Milk Market

The market for human breast milk starts with demand from hospitals for pre-term infants. The American Academy of Pediatrics writes:
The potent benefits of human milk are such that all preterm infants should receive human milk. ...  Mother’s own milk, fresh or frozen, should be the primary diet, and it should be fortified appropriately for the infant born weighing less than 1.5 kg. If mother’s own milk is unavailable despite significant lactation support, pasteurized donor milk should be used.
The demand then continues with a belief that human milk might have properties that are useful to adults as well. Some biomedical companies are involved in research, and there is apparently a subculture of bodybuilders who believe that consuming human milk helps them build muscle.

What are the sources of supply to meet this demand? One source is donations that happen through the 19 locations of the Human Milk Banking Association of North America, as well as other donor organizations. But there are also emerging for-profit companies like Prolacta Bioscience and International Milk Bank, which buy breast milk, screen and test it, sometimes add additional nutrients, and then sell it to hospitals. There are also websites that facilitate buying and selling breast milk.

This market is one where prices are fairly clear: the for-profit companies typically offer moms $1.50-$2 per ounce for breast milk, and end up selling it to hospitals for roughly $4 per ounce. Quantities are less clear, although for a rough sense, the nonprofit Human Milk Banking Association of North America dispensed 3.1 million ounces of breast milk in 2013, while a single for-profit firm, Prolacta, plans to process 3.4 million ounces this year.
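For a rough sense of the money involved, here's the implied arithmetic. The per-ounce prices and the 3.4 million ounce figure are the ones quoted above; treating every ounce purchased as resold at the hospital price is my simplifying assumption, not a description of any firm's actual finances.

```python
# Prices and quantities quoted above; assuming every ounce purchased is
# resold at the hospital price is a simplification for illustration.
ounces = 3_400_000                   # Prolacta's planned volume this year
price_paid_to_moms = (1.50, 2.00)    # dollars per ounce, quoted range
price_to_hospitals = 4.00            # dollars per ounce, roughly

revenue = ounces * price_to_hospitals
payments_low, payments_high = (ounces * p for p in price_paid_to_moms)

print(f"Implied sales to hospitals: ${revenue / 1e6:.1f} million")
print(f"Implied payments to mothers: ${payments_low / 1e6:.1f}-{payments_high / 1e6:.1f} million")
print(f"Gross margin before screening, testing, fortification, and distribution costs: "
      f"${(revenue - payments_high) / 1e6:.1f}-{(revenue - payments_low) / 1e6:.1f} million")
```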

Any product that involves a mixture of donated and paid-for elements is going to be a source for controversy, and when the product involves fluids from the human body, the controversy is going to ramp up one more level. Here are some of the issues:

Many people have a gut-level reaction that human breast milk for neonatal children is the sort of product that should be run on the basis of donations. But two concerns arise here, as enunciated by Julie P. Smith in "Market, breastfeeding and trade in mother's milk," which appeared earlier this year in the International Breastfeeding Journal (10:9). As Smith writes: "Human milk is being bought and sold. Commodifying and marketing human milk and breastfeeding risk reinforcing social and gender economic inequities. Yet there are potential benefits for breastfeeding, and some of the world’s poorest women might profit. How can we improve on the present situation where everyone except the woman who donates her milk benefits?" There are a number of ideas to unpack here.

First, a substantially expanded supply of breast-milk would improve the health prospects of pre-term infants. Donated breast-milk doesn't seem able to fill the need.

Second, it's not clear why mothers should be expected to pump, save, and donate breast milk for free when the rest of the health care system is getting paid. In some practical sense, the social choice may come down to paying the health care system to address the sicknesses that infants experience from a lack of breast milk, or paying mothers for breast milk.

Third, there are real issues here involving social inequalities. Earlier this year in Detroit,  a company called Medolac announced a plan to purchase breast milk. It received a hostile open letter with a number of signatories, starting with the head of the Black Mothers' Breastfeeding Association. The letter read, in part:

[W]e are writing to you in the spirit of open dialogue about your company’s recent attempts to recruit African-American and low-income women in Detroit to sell their breast milk to your company, Medolac Laboratories. We are troubled by your targeting of African-American mothers, and your focus on Detroit in particular. We are concerned that this initiative has neither thoroughly factored in the historical context of milk sharing nor the complex social and economic challenges facing Detroit families. ... Around the country, African-American women face unique economic hardships, and this is no less true in our city. In addition, African American women have been impacted traumatically by historical commodification of our bodies. Given the economic incentives, we are deeply concerned that women will be coerced into diverting milk that they would otherwise feed their own babies.
Medolac withdrew its proposal. Without getting into the language of the letter ("commodification" and "coercion" are not being used in the sense of an economics class), the basic public health question remains: Given the very substantial health benefits of breast milk for infants, can it make sense to offer mothers a financial incentive to sell their breast milk? Especially knowing that this incentive will have greater weight for mothers in lower income groups?

Fourth, the economic choices involved in breastfeeding are inevitably intertwined with other choices that face nursing mothers. Julie Smith points out that there are a variety of incentives that encourage early weaning of infants, like the promotion of infant formula and baby food products, combined with laws and rules affecting how quickly new mothers re-enter the workforce. Reconsidering these incentives in a broader context, with an eye to encouraging breastfeeding in all contexts, could potentially lead both to more breastfeeding and to greater supplies of donated breast milk. Smith writes:
‘The market’ fails to protect breastfeeding, because market prices give the wrong signals. An economic approach to the problem of premature weaning from optimal breastfeeding may help prioritise global maternity protection as the foundation for sustainable development of human capital and labour productivity. It would remove fiscal subsidies for breast milk substitutes, tax their sale to recoup health system costs, and penalise their free supply, promotion and distribution. By removing widespread incentives for premature weaning, the resources would be available for the world to invest more in breastfeeding.
Finally, in an internet-based economy that excels at connecting decentralized suppliers and buyers, there is no chance that the paid market for breast milk is going away. At least some of the market--say, the demand from bodybuilders--is likely to remain shadowy. But for neonatal infants and research purposes, it is useful for the bulk of the breast-milk market to come out of the shadows so that it can be subject to basic regulations assuring that the breast milk isn't adulterated by cow's milk, microbes, or worse.

If you'd like another example of the potential for economic markets in bodily fluids, I discuss the arguments concerning how to increase the supply of blood in "Volunteers for Blood, Paying for Plasma" (May 16, 2014).  A proposal for using the recently dead as a source of blood donations is here.


Friday, August 21, 2015

Snapshots of Connected and Interactive in 2015

For 20 years, Mary Meeker--now of the venture capital firm Kleiner, Perkins, Caufield and Byers--has been presenting an annual overview of Internet trends that has become semi-legendary in the industry. If you'd like to listen to a speaker go through 196 PowerPoint slides in 25 minutes, the link to her presentation at the Internet Trends 2015--Code Conference on May 27, 2015 is here. If you just want the slides, they are here. For those who like taking a drink from a fire hose of information, this presentation is for you.

Here, I'll just pass along a few slides that particularly caught my eye, on the general theme of how our interaction with media is evolving. The old model is about turning a station on or off, or going to a certain website to read what's there. The new model is toward greater interactivity. For example, here's a figure that starts with the VCR and cable television back in the 1970s, as ways in which users began to exercise more control over media, and points to the many ways in which this trend has expanded.



Of course, this change has now gone well beyond the ability to choose which movie to watch. Interactivity involves individuals both posting content and looking at content posted by others. For example, YouTube reports that 300 hours of video are uploaded to the site every minute, and Meeker offers a graph showing that Facebook is now up to 4 billion video views per day. 



Of course, this use of media isn't just about watching cat videos. It's more and more about using mobile devices like smartphones or tablets for many purposes: news, directions, events, and more.



Indeed, many of the "millennials" in the 18-34 age bracket are umbilically attached to their smartphones.

The upshot of these kinds of changes is a rapid growth in the time spent each day using digital media--especially with mobile connections. US adults are now up to more than five hours a day with digital media, double the level of seven years ago. 








Thursday, August 20, 2015

Shifting Visions of the "Good Job"

As the unemployment rate has dropped to 5.5% and less in recent months, the arguments over jobs have shifted from the lack of available jobs to the qualities of the jobs that are available. It's interesting to me how our social ideas of what constitutes a "good job" have a tendency to shift over time. Joel Mokyr, Chris Vickers, and Nicolas L. Ziebarth illuminate some of these issues in "The History of Technological Anxiety and the Future of Economic Growth: Is This Time Different?" which appears in the Summer 2015 issue of the Journal of Economic Perspectives.  All articles from JEP going back to the first issue in 1987 are freely available on-line compliments of the American Economic Association. (Full disclosure: I've worked as Managing Editor of the JEP since 1986.)

One theme that I found especially intriguing in the Mokyr, Vickers, and Ziebarth argument is how some of our social attitudes about what constitutes a "good job" have nearly gone full circle in the last couple of centuries. Back at the time of the Industrial Revolution in the late 18th and into the 19th century, it was common to hear arguments that the shift from farms, artisans, and home production into factories involved a reduction in the quality of work. But in recent decades, a shift away from factories and back toward decentralized production is sometimes viewed as a decline in the quality of work, too. Here are some examples:

For example, one concern from the time of the original Industrial Revolution was that factory work required workers to schedule their time in ways that removed flexibility. Mokyr, Vickers, and Ziebarth (citations omitted) note: "Workers who were “considerably dissatisfied, because they could not go in and out as they pleased” had to be habituated into the factory system, by means of fines, locked gates, and other penalties. The preindustrial domestic system, by contrast, allowed a much greater degree of flexibility."

Another type of flexibility in the time before the Industrial Revolution was that people often could combine their work life with their home life, and the separation of the two was thought to be worrisome: "Part of the loss of control in moving to factory work involved the physical separation of home from place of work. While today people worry about the exact opposite phenomenon with the lines between spheres of home and work blurring, this disjunction was originally a cause of great anxiety, along with the separation of place-of-work from place-of-leisure. Preindustrial societies had “no clearly defined periods of leisure as such, but economic activities, like hunting or market-going, obviously have their recreational aspects, as do singing or telling stories at work.”

Of course, a common modern concern about the quality of jobs is that many jobs lack regular hours. Many workers may face irregular schedules, or no assurance of a minimum number of hours they can work. Moreover, many workers now worry that work life is intruding back into home life, because we are hooked to our jobs by our computers and phones. Mokyr, Vickers, and Ziebarth write:

"Even if ongoing technological developments do not spell the end of work, they will surely push certain characteristics of future jobs back toward pre-factory patterns. These changes involve greater flexibility in when and where work takes place. Part and parcel of this increase in flexibility is the breakdown of the separation between work and home life. The main way in which flexibility seems to be manifesting itself is not through additional self-employment, but instead through the rise of contract firms who serve as matchmakers, in a phenomenon often driven by technology. For example, Autor (2001) notes that there was a decline in independent contractors, independent consultants, and freelancers as a portion of the labor force from 1995 to 1999—peak years for expansion of information technology industries—though there was a large increase in the fraction of workers employed by contract firms. The Census Bureau’s counts “nonemployer businesses,” which includes, for example, people with full-time employment reported in the Current Population Survey but who also received outside consulting income. The number of nonemployer businesses has grown from 17.6 million in 2002 to 22.7 million in 2012. In what is sometimes called the “sharing economy,” firms like Uber and AirBnB have altered industries like cab driving and hotel management by inserting the possibility of flexible employment that is coordinated and managed through centralized online mechanisms. ...
[C]ertain kinds of flexibility have become more prevalent since 2008, particularly flexibility with regard to time and place during the day, making it possible for workers to attend to personal or family needs. On the other side, flexibility can be a backdoor for employers to extract more effort from employees with an expectation that they always be accessible. ...  Also, flexibility can often mean variable pay. The use of temp and contract workers in the “on-demand” economy (also known as contingent labor or “precarious workers”) has also meant that these workers may experience a great deal of uncertainty as to how many hours they will work and when they will be called by the employers. Almost 50 percent of part-time workers receive only one week of advance notice on their schedule."
Another fairly common theme of economists writing back in the 18th and 19th centuries, ranging from Adam Smith to Karl Marx, was that the new factory jobs treated people as if they were cogs in a machine.
"Adam Smith (1776, p. 385) cautioned against the moral effects of this process, as when he wrote: “The man whose whole life is spent in performing a few simple operations . . . generally becomes as stupid and ignorant as it is possible for a human creature to become.” Karl Marx, more well-known than Smith as a critic of industrialization, argued that the capitalist system alienates individuals from others and themselves. ... For Marx and others, it was not just that new factory jobs were dirty and dangerous. Jeffersonian encomiums aside, the pastoral life of small shop owners or yeoman farmers had not entailed particularly clean and safe work either. Instead the point was that this new work was in a deeper way unfit for humans and the process of covert coercion that forced people into these jobs and disciplined them while on the job was debasing."
Now, of course, there is widespread concern about a lack of factory jobs for low- and middle-skilled workers. Rather than worrying about these jobs being debasing or unfit for humans, we worry that there aren't enough of them.

I guess one reaction to this evolution of attitudes about "good jobs" is just to point out that workers and employers are both heterogeneous groups. Some workers put a greater emphasis on flexibility of hours, while others might prefer regularity. Some workers prefer a straightforward job that they can leave behind at the end of the day; others prefer a job that is full of improvisation, learning on the fly, crises, and deadlines. To some extent, the labor market lets employers and workers match up as they desire. There's certainly no reason to assume that a "good job" should have a one-size-fits-all definition.

A second reaction is that there is clearly a kind of rosy-eyed nostalgia at work about the qualities of jobs of the past. Many of us tend to focus on a relatively small number of past jobs, not the jobs that most people did most of the time. In addition, we focus on a few characteristics of those jobs, not the way the jobs were actually experienced by workers of that time.

But yet another reaction is that the qualities of available jobs aren't just a matter of negotiation between workers and employers, and they aren't an historical inevitability. The qualities of the range of jobs in an economy are affected by a range of institutions and factors: the human capital that workers bring to jobs, the extent of on-the-job training, how easy it is for someone with a series of employers or irregular hours to set up health insurance or a retirement account, rules about workplace safety, rules that impose costs on laying off or firing workers (which inevitably make firms reluctant to hire more regular employees), the extent and type of union representation, rules about wages and overtime, and much more. I do worry that career-type jobs offering the possibility of longer-term connectedness between a worker and an employer seem harder to come by. In a career-type job, both the worker and the employer place some value on the expected continuance of their relationship over time, and they act and invest resources accordingly.

Tuesday, August 18, 2015

Carbon Tax: Practicalities of Cutting a Deal

The key practical questions about a carbon tax are what should be taxed and how much it should be taxed. The what is fairly clear; the how much is fuzzier. But if advocates of a carbon tax could agree on the size and shape of such a tax, they could offer some interesting incentives for political wheeling and dealing. Donald Marron, Eric Toder, and Lydia Austin write about "Taxing Carbon: What, Why, and How" in a June 2015 exposition for the Tax Policy Center (which is a joint venture of Brookings and the Urban Institute).

Discussions of climate change typically focus on carbon emissions, but there are other greenhouse gases (and non-gases, like soot) that can also trap heat in the atmosphere. Here's a list of the major greenhouse gases and how much heat they trap relative to carbon dioxide. It's common in discussions of this subject to refer to all taxes on greenhouse gases as a "carbon tax," and to express the emissions of other gases in terms of "carbon dioxide equivalents."




Marron, Toder, and Austin write:

"[P]olicymakers must address the fact that greenhouse gases differ in their chemical and atmospheric properties. Methane, for example, traps more heat, gram-for-gram, than carbon dioxide does, but it has a shorter atmospheric lifetime. A cost-effective tax should reflect such differences, raising the tax rate for gases that are more potent and lowering it for gases that stay in the atmosphere for less time. Analysts have developed measures known as global warming potentials to make such comparisons. According to the potentials the EPA uses, methane is 21 times more potent than carbon dioxide over a century, and nitrous oxide is 310 times as potent (table 1). By those measures, a $10 per  ton tax on carbon dioxide would imply a $210 per ton tax on methane and a $3,100 per ton tax on nitrous oxide."

But those numbers are about the ratio of taxes on greenhouse gases relative to each other. What should the actual taxes themselves be? Economists argue that the price placed on greenhouse gas emissions should be set according to the damage caused by those emissions--in effect, consumption that leads to carbon emissions should pay the price for harm caused. But estimating the social cost of carbon emissions is very difficult, and estimates are all over the map. Marron, Toder, and Austin: 
Estimates of the marginal social cost of carbon thus vary widely. In developing a cost to inform US climate policy, an interagency working group commissioned 150,000 simulations from three leading models, all using the same 3 percent real discount rate. The resulting estimates fell mostly in the -$10 to $50 per ton range (in today’s dollars), with a few lower and some significantly higher. The central tendency was a cost of $27 per ton in 2015 and rising in the future. An update increased that figure to about $42 per ton in 2015, with estimates again ranging from slightly below zero to more than $100. These wide ranges, and the underlying uncertainty about long-term economic and geophysical  responses to rising greenhouse gas concentrations, have left some analysts pessimistic about the ability of such modeling efforts to identify an appropriate price for carbon. ... 
Along with all the uncertainties about how these emissions affect climate, and about placing an economic value on those changes, another problem is that the benefits of limiting climate change are international, but the costs of a US carbon tax are national. Of course, the ultimate hope is for a coordinated international effort, but with the world moving toward more carbon-intensive energy sources, there's no assurance this will happen. Marron, Toder, and Austin point out:

"[A] coordinated international response should focus on worldwide emissions and impacts. If a nation considers unilateral action, however, it must decide whether to focus on domestic costs and benefits or to consider other nations as well. The difference is large. Greenstone, Kopits, and Wolverton estimate that the United States bears only 7 to 10 percent of the worldwide marginal social costs of carbon. If each new metric ton of carbon dioxide emissions imposes $40 in worldwide damages, only $3 to $4 would fall on the United States. They argue that the United States ought to use the global measure when evaluating regulatory policies, but this view is not universal. Indeed, policymakers take a US-only view when evaluating other energy and environmental policies that
have international spillovers."

In the face of such uncertainties, a possible approach is to set a relatively low carbon tax that would slowly rise over time. The rising level would encourage ongoing actions to reduce emissions of greenhouse gases, and there could be an official process, perhaps every 5-10 years, for evaluating whether the tax was at roughly the correct level. With some level of a carbon tax in mind, the question becomes one of practical politics. One of my takeaways from the Marron, Toder, and Austin essay is that advocates of a carbon tax have some arguments they could make that might intrigue the undecided middle ground. Here are several such arguments:

1) A carbon tax would reduce the use of fossil fuels, and thus would reduce a number of conventional pollutants. The gains or "co-benefits" from reducing conventional pollutants are substantial--indeed, by some estimates the co-benefits of a moderate carbon tax might make the tax worthwhile even if reducing carbon emissions brought no other gains. Marron, Toder, and Austin put it this way:

Climate change is not the only harm associated with burning fossil fuels. Power plants, factories, vehicles, and other sources also emit air pollutants that directly harm human health, including fine particulate matter, sulfur dioxide, and nitrogen oxides. Vehicle use also imposes other external costs, including congestion, road damage, and accidents. ... As a result, a carbon tax would generate “co-benefits”—improvements in human health and well-being unrelated to climate concerns. The magnitude of those co-benefits depends on several factors, including the prevalence and value of potential health improvements (e.g., reduced asthma, bronchitis, heart attacks) and the scope of benefits included (e.g., just air pollution from fossil fuels or also congestion and accidents that result from driving). In a comprehensive analysis including both air pollution and vehicle externalities, Parry, Veung, and Heine estimate that the co-benefits of a carbon tax in the United States would be about $35 per ton. In a narrower analysis of the co-benefits from its proposed regulations on power plants, the EPA estimates that the co-benefits of reduced air pollution are at least as large as potential climate benefits. These estimates thus suggest that, in the absence of new policies addressing those harms, a substantial carbon tax would improve US well-being even if we give no weight to climate change.
2) The revenues from a carbon tax could allow offsetting tax cuts in other areas. Taxes discourage certain behaviors, and it's surely better to discourage carbon emissions than to discourage work and saving. Marron, Toder, and Austin write:

The main objective of a carbon tax is to reduce environmental damage by encouraging producers and consumers to cut back on activities that release greenhouse gases. This is its first dividend. A carbon tax can also generate a second dividend: an improvement in economic efficiency by using the resulting revenue to reduce distortionary taxes, such as those on income or payroll. ... For legislative purposes, the most important estimates are those of the Congressional scoring agencies, the Joint Committee on Taxation, and the Congressional Budget Office. In late 2013, they estimated the revenue effects of a tax on most greenhouse emissions starting at $25 per ton and increasing 2 percent faster than inflation. Scaling those estimates to CBO’s latest budget projections, they imply net revenue of about $90 billion in its first complete year and about $1.2 trillion over its first decade.
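For a back-of-the-envelope check on those magnitudes, here's a rough sketch. The $25-per-ton starting rate and the 2-percent-faster-than-inflation escalation come from the quotation; the covered emissions base, the pace of emissions decline, and the offset for lower income and payroll tax receipts are all assumptions of mine, not JCT or CBO parameters.

```python
# Back-of-the-envelope revenue sketch. The $25/ton starting rate and the
# 2-percent-faster-than-inflation escalation come from the passage above;
# the covered emissions base, the 1% annual decline in emissions, and the
# 25% income/payroll-tax offset are assumptions made here for illustration.
start_rate = 25.0          # dollars per ton CO2e, in real terms
rate_growth = 0.02         # real growth in the tax rate per year
base_emissions = 5.0e9     # covered tons of CO2e in year one (assumption)
emissions_decline = 0.01   # annual decline as the tax bites (assumption)
offset = 0.25              # revenue lost to lower income/payroll receipts (assumption)

net_revenue = []
for year in range(10):
    rate = start_rate * (1 + rate_growth) ** year
    emissions = base_emissions * (1 - emissions_decline) ** year
    net_revenue.append(rate * emissions * (1 - offset))

print(f"Year 1 net revenue: ${net_revenue[0] / 1e9:.0f} billion")
print(f"First-decade total: ${sum(net_revenue) / 1e12:.2f} trillion")
# With these assumptions the sketch lands near $94 billion in year one and
# roughly $1 trillion over the decade -- the same ballpark as the JCT/CBO
# figures quoted above, which rest on far more detailed modeling.
```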

3) A meaningful carbon tax would reduce the need for a range of other government rules and regulations. For example, rules about vehicle efficiency or energy conservation would become less necessary, and subsidies for renewable energy sources could be scaled back, because low-carbon producers would instead benefit from not paying the carbon tax. One last time, here are Marron, Toder, and Austin:

In the absence of a broad, substantial price on carbon, policymakers have attempted to reduce carbon emissions through a mix of narrower policies. The Environmental Protection Agency is developing emissions standards for new and existing power plants, the Department of Transportation has expanded vehicle fuel economy standards, and the Department of Energy has expanded appliance energy efficiency standards. Tax subsidies and renewable fuel standards favor renewable and low-carbon fuels, such as wind, solar, biomass, geothermal, and nuclear, and biodiesel and electric vehicles. A sufficiently high and broad carbon tax would reduce the benefit of these policies. If policymakers contemplate such a tax, it would be appropriate to reassess these policies to see whether their benefits justify their costs.

Some of those concerned about climate change won't like the policy tradeoffs I'm suggesting here. But lasting political coalitions need broad support.

Monday, August 17, 2015

Economics of Information Overload: Thoughts from Herb Simon

I tend to think of information overload as a 21st-century problem, but serious folks were talking about it more than 40 years ago. In an essay published in 1971, Herbert A. Simon (who would win the Nobel prize in economics in 1978) offered the insight that "a wealth of information creates a poverty of attention." Simon's essay, “Designing Organizations for an Information-Rich World,” appears in a volume edited by Martin Greenberger called Computers, Communications, and the Public Interest (Johns Hopkins Press, 1971, pp. 37-52). Here's the context for Simon's remark, along with a few other thoughts from his essay and from his comments in the panel conversation that followed that caught my eye:


A wealth of information creates a poverty of attention
"Last Easter, my neighbors bought their daughter a pair of rabbits. Whether by intent or accident, one was male, one female, and we now live in a rabbit-rich world. Person less fond than I am of rabbits might even describe it a rabbit-overpopulated world. Whether a world is rich or poor in rabbits is a relative matter. Since food is essential for biological populations, we might judge the world as rabbit-rich or rabbit-poor by relating the number of rabbits to the amount of lettuce and grass (and garden flowers) available for rabbits to eat. A rabbit-rich world is a lettuce-poor world, and vice versa. The obverse of a population problem is a scarcity problem, hence a resource-allocation problem. There is only so much lettuce to go around, and it will have to be allocated somehow among the rabbits. Similarly, in an information-rich world, the wealth of information means a dearth of something else: a scarcity of whatever it is that information consumes. What information consumes is rather obvious: it consumes the attention of its recipients. Hence, a wealth of information creates a poverty of attention and a  need to allocate that attention efficiently among the overabundance of information resources that might consume it.
A large share of the costs of an information-rich environment is carried by information users, not information providers. 
"In an information-rich world, most of the cost of information is the cost incurred by the recipient. It is not enough to know how much it costs to produce and transmit information: we much also know now much it costs, I terms of scarce attention, to receive it. I have tried bringing this argument home to my friends by suggesting that they calculate how much  the New York Times (or the Washington Post) costs them, including the costs of reading it. Making the calculation usually causes them some alarm, but not enough for them to cancel their subscriptions. Perhaps the benefits still outweigh the costs."
Does your information-processing system listen more than it talks?  
"An information-processing subsystem (a computer or a new organizational unit) will reduce the net demand on the rest of the organization’s attention only if it absorbs more information previously received by others than it produces—that is, if it listens and thinks more than it speaks. ... The design principle that attention is scarce and must be preserved is very different from the principle of “the more information the better.” … The proper aim of a management information system is not to bring the manager all the information he needs, but to reorganize the manager’s environment of information so as to reduce the amount of time he must devote to receiving it."
Humans may be poorly adapted to disregard information readily enough.  
"Our attitudes toward information reflect the culture of poverty. We were brought up on Abe Lincoln walking miles to borrow (and return!) a book and reading it by firelight. Most of us are constitutionally unable to throw a bound volume into the wastebasket. We have trouble enough disposing of magazines and newspapers. Some of us are so obsessed with the need to know that we feel compelled to read everything that falls into our hands, although the burgeoning of the mails is helping to cure us of this obsession. If these attitudes were highly functional in the world of clay tablets, scribes, and human memory; if they were at least tolerable in the world of the printing press and the cable; they are completely maladapted to the world of broadcast systems and Xerox machines."
The traditional solutions to information overload still work. 

"Even before television, we lived in an environment of information conveyed mostly by our neighbors, including some pretty tall tales. We acquired a variety of techniques for dealing with information overload. We know that there are people who can talk faster than we can and give us an argument on almost any topic. We listen patiently, because we cannot process information fast enough to refute them; that is, until the next day, when we find the hole in their argument. A relevant rule that my father taught me was, "Never sign in the presence of a salesman." By adopting such rules and their extensions, we allow ourselves the extra processing time needed to deal with the information overload. ... I think that all levels of intelligence, human beings have common sense protecting them from the worst features of their information environment. If information overload ever really gets the best of me, my last resort is to follow the advice of Gertrude Stein in the opening pages of The Autobiography of Alice B. Toklas: `I like a view, but I like to sit with my back turned to it.'"




Friday, August 14, 2015

Europe: When the Macro Overshadows the Micro

Christian Thimann currently works at the French insurance group AXA while also holding an academic position at the Paris School of Economics. However, from 2008 to 2013 he was Director General and Adviser to the President at the European Central Bank, which makes his views on the economics and politics of the euro crisis especially worth considering. He lays out his perspective in
"The Microeconomic Dimensions of the Eurozone Crisis and Why European Politics Cannot Solve Them," which appears in the Summer 2015 issue of the Journal of Economic Perspectives. Like all JEP articles, it is freely available online courtesy of the American Economic Association. (Full disclosure: I've worked as Managing Editor of JEP since the first issue of the journal in 1987.)

On the economics of the eurozone, Thimann argues that the problems have microeconomic roots, not just macroeconomic ones. Here are a couple of intriguing figures. Thimann points out that since the inception of the euro, some economies have consistently run trade surpluses, while others have consistently run trade deficits. This figure shows the cumulative trade surpluses and deficits over time. What's especially interesting to me is the relative steadiness of these lines: countries with trade surpluses tend to add surpluses every year, while countries with deficits tend to add deficits every year.


Thimann argues that a driving factor behind these trade imbalances arises out of the interaction between wages and productivity. If wages in a country are growing a lot faster than productivity, then in effect, the cost of producing in that country is rising and it will be harder for that country to sell in international markets. If two countries share the same currency, so that exchange rate adjustments are not possible, then a country where wages are growing much faster than productivity will be at a competitive disadvantage compared with countries where wage growth is more closely aligned with productivity growth. Thimann points out that in the trade deficit countries, compensation soared well above productivity growth almost as soon as the euro was in place.
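
Thimann's argument can be condensed into a simple unit labor cost comparison: unit labor costs are roughly compensation per worker divided by output per worker, and inside a currency union a country whose unit labor costs rise faster than its partners' cannot restore competitiveness through devaluation. The sketch below uses purely hypothetical index numbers, not Thimann's data, just to show how quickly a modest annual gap cumulates.

# Illustrative unit labor cost (ULC) comparison, using invented index numbers.
# ULC is roughly compensation per worker divided by output per worker; inside a
# currency union, a country whose ULC rises faster than its partners' cannot
# restore competitiveness by devaluing.

def ulc_index(wage_growth, productivity_growth, years):
    """Index of unit labor costs after `years`, starting from 100."""
    return 100 * ((1 + wage_growth) / (1 + productivity_growth)) ** years

# Stylized decade inside a currency union (annual rates are purely hypothetical):
surplus_country = ulc_index(wage_growth=0.02, productivity_growth=0.015, years=10)
deficit_country = ulc_index(wage_growth=0.05, productivity_growth=0.015, years=10)

print(f"Surplus-country ULC index after 10 years: {surplus_country:.0f}")
print(f"Deficit-country ULC index after 10 years: {deficit_country:.0f}")
print(f"Relative cost disadvantage: {deficit_country / surplus_country - 1:.0%}")

With these invented numbers, a three-percentage-point annual gap in wage growth opens up a cost disadvantage on the order of a third within a decade, which is the mechanism behind the pattern Thimann documents.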


Why is Greece not shown among the countries here? Thimann writes in the note under the table: "Greece is not shown in the chart because, while the productivity increase is broadly comparable to that of Portugal, the wage growth was even steeper, rising by 2008 to 180 percent of the 1998 value, hence exceeding the scale of the countries shown; wages have declined by about 20 percent since the crisis to 160 percent."

Why did wages rise so quickly in the trade deficit countries? Some countries saw real estate bubbles or surges in government borrowing that pushed up wages in a way that productivity growth could not sustain. Public-sector wages took off: "Over the first ten years of the euro, public wages grew by 40 percent in the eurozone as a whole and by 30 percent in Germany. But public sector wages rose by 50 percent in France, 60 percent in Italy, 80 percent in Spain, 110 percent in Greece, and 120 percent in Ireland." A common justification for the rapid wage increases was that price levels in many of the trade deficit countries were rising, often at 6-7% per year, so there was a perceived need for wages to keep up. But for international trade and competitiveness, what matters is the wage itself, as a cost of production--not whether it merely kept pace with local prices.

 Thimann goes into some detail about how the trade deficit countries in the eurozone also tended to impose rules and regulations leading to higher wages and restrictions on business. My favorite story of the heavy hand of regulation in Greece is one that Megan Greene told on her blog back in 2012, but I've been telling it ever since. It's about a combination bookstore/coffee shop in Athens which, because of regulations, was at that time not allowed to sell books or coffee. Greene writes:
This is best encapsulated in an anecdote from my visit to Athens. A friend and I met up at a new bookstore and café in the centre of town, which has only been open for a month. The establishment is in the center of an area filled with bars, and the owner decided the neighborhood could use a place for people to convene and talk without having to drink alcohol and listen to loud music. After we sat down, we asked the waitress for a coffee. She thanked us for our order and immediately turned and walked out the front door. My friend explained that the owner of the bookstore/café couldn’t get a license to provide coffee. She had tried to just buy a coffee machine and give the coffee away for free, thinking that lingering patrons would boost book sales. However, giving away coffee was illegal as well. Instead, the owner had to strike a deal with a bar across the street, whereby they make the coffee and the waitress spends all day shuttling between the bar and the bookstore/café. My friend also explained to me that books could not be purchased at the bookstore, as it was after 18h and it is illegal to sell books in Greece beyond that hour. I was in a bookstore/café that could neither sell books nor make coffee.
One story like this is a comedy. An economy in which stories like this are commonplace--and which is locked into a free-trade zone with countries sharing a common currency--is a tragedy waiting to happen.

On the politics of the eurozone, Thimann argues that the euro, the European Central Bank, and all the Europe-wide negotiations over debt are overshadowing these other issues. Normally, when a democratic country has miserable economic performance, with high unemployment and slow growth, a common response is for its citizens to demand policy changes from their politicians. But in the eurozone, when a country has a miserable economic performance, the politicians of that country tell the citizens that it's not their fault: it's all the fault of the Euro-crats in Brussels, or the Germans pulling strings behind the scenes, or the ECB. The politicians tell the voters that self-examination is unnecessary and even counterproductive, because the real task is to unite against the malign outsiders.

Here are some concluding thoughts from Thimann:

At the core of the economic crisis in the eurozone is the problem of unemployment in several countries. Roughly 18.2 million people are unemployed in early 2015. In about half the eurozone countries, the unemployment rate is below 10 percent, and in Germany it is actually below 5 percent (Eurostat data, February 2015), but in France, 10.7 percent of the labor force are unemployed; in Italy, 12.7 percent; in Portugal, 14.1 percent; in Spain, 23.2 percent; and in Greece, 26.0 percent. ...
It is legitimate to speak about this as a problem for the eurozone in the sense that economic policies in a single currency area are truly a matter of common concern, and also because high unemployment interferes with the smooth functioning of the eurozone, challenging its economic and political cohesion. But it is not accurate to attribute responsibility for the problem, or the solution, to the eurozone as a whole, to European institutions, or to other countries. Jobs fail to be created in a number of these countries not because of a “lack of demand” as often claimed, but mainly because wage costs are high relative to productivity, social insurance and tax burdens are heavy, and the business environment is excessively burdensome. All of this should be viewed not in absolute terms, but in relative terms, compared with other economies in Europe and countries around the world where labor costs and productivity are more advantageous, and the business environment is friendlier.
“Europe” is not an all-powerful actor in the field of national economic policies, but only a potentially useful facilitator. Only the country concerned is the legitimate and able party to improve its own economic functioning in line with its social preferences and economic setup. This is why European politics cannot solve the microeconomic dimensions of the eurozone crisis. Within individual countries, it is the governments, administrative authorities, social partners, and all other economic stakeholders that are the legitimate actors in the field of economic and social policies....
For the eurozone countries, their economic and unemployment problems are not primarily a question about some countries versus other countries within the monetary union, but about finding their place in an open global economy—that is, about competing and cooperating successfully with advanced, emerging, and developing economies across the globe. An inward-looking European debate on the distribution of the relative adjustment burden for structural reforms would dramatically overlook the much broader challenges of integration into the global economy. ... It may be more glamorous to focus on European monetary policy, the “European architecture,” or the “bigger macro picture.” But the real issue of—and solution to—the crisis in the eurozone lies in the mostly microeconomic trenches of national economic, social, and structural policies.
I think Thimann may understate the fundamental macroeconomic problems created by the presence of the euro (as I've discussed here and here, for example). But he seems to me quite correct to emphasize that many European countries badly need structural, regulatory, and microeconomic adjustments. Moreover, politicians and voters in many of these countries would much rather assail the rest of Europe over international negotiations involving public debt and the euro than face their domestic political issues.

Thursday, August 13, 2015

US Mergers and Antitrust in 2014

Each year the Federal Trade Commission and the Department of Justice Antitrust Division publish the Hart-Scott-Rodino Annual Report, which offers an overview of merger and acquisition activity and antitrust enforcement during the previous year. The Hart-Scott-Rodino legislation requires that all mergers and acquisitions above a certain size--now set at $75.9 million--be reported to the antitrust authorities before they occur. The report thus provides a useful snapshot of recent merger and antitrust activity in the United States.

For example, here's a figure showing the total number of mergers and acquisitions reported.  There was a substantial jump in the total number of mergers in 2014, not quite back to the higher levels of 2006 and 2007, but headed in that direction.


The report also provides a breakdown on the size of mergers. Here's what it looked like in 2014. As the figure shows, there were 225 mergers and acquisitions of more than $1 billion.

After a proposed merger is reported, the FTC or the US Department of Justice can issue a "second request" if it perceives that the merger might raise anticompetitive issues. In the last few years, about 3-4% of reported mergers have received such a second request. This percentage may seem low, but it's not clear that it is too low. After all, the US government isn't second-guessing whether mergers and acquisitions make sense from a business point of view; it's only asking whether the merger might reduce competition in a substantial way. If two companies that aren't directly competing with each other combine, or if two companies combine in a market with a number of other competitors, the merger or acquisition may turn out well or poorly from a business point of view, but it is less likely to raise competition issues.

Teachers of economics may find the report a useful source of recent examples of antitrust cases, and there are also links to some of the underlying case documents and analysis (which students can be assigned to read). Here are a few examples. In the first one, a merger was questioned and called off. In the second, a merger was allowed only after a number of plants were divested, so as to allow competition to continue. In the third, an airline merger was allowed to proceed only with provisions requiring divestiture of gates and landing slots at a number of airports, thus opening the way for competition from other airlines.

In Jostens/American Achievement Group, the Commission issued an administrative complaint and authorized staff to seek a preliminary injunction in federal district court enjoining Jostens, Inc.’s proposed $500 million acquisition of American Achievement Corp. The Commission alleged that the acquisition would have substantially reduced quality and price competition in the high school and college class rings markets. Shortly after the Commission filed its administrative complaint, the parties abandoned the transaction. ...

In April 2014, the FTC also concluded its 2013 challenge to Ardagh Group SA’s proposed acquisition of Saint-Gobain Containers, Inc. The $1.7 billion merger would have allegedly concentrated most of the $5 billion U.S. glass container industry in two companies – the newly combined Ardagh/Saint-Gobain, and Owens-Illinois, Inc. These two companies would have controlled about 85 percent of the glass container market for brewers and 77 percent of the market for distillers, reducing competition and likely leading to higher prices for customers that purchase beer or spirits glass containers. The FTC filed suit in July 2013 to stop the proposed transaction. While the challenge was pending, Ardagh agreed to sell six of its nine glass container manufacturing plants in the United States to a Commission-approved buyer.

In United States, et al. v. US Airways Group, Inc. and AMR Corporation, the Division and the states of Texas, Arizona, Pennsylvania, Florida, Tennessee, Virginia, and the District of Columbia challenged the proposed $11 billion merger between US Airways Group, Inc. (“US Airways”) and American Airlines’ parent company, AMR Corporation. On April 25, 2014, the court entered the consent decree requiring US Airways and AMR Corporation to divest slots and gates in key constrained airports across the United States. These divestitures, the largest ever in an airline merger, have allowed low cost carriers to fly more direct and connecting flights in competition with legacy carriers and have enhanced system-wide competition in the airline industry.


Wednesday, August 12, 2015

China's Economic Growth: Pause or Trap?

China's rate of economic growth has slowed from the stratospheric 10% per year that it averaged from 1980 through 2010 to a merely torrid 8% per year from 2011-2014, and perhaps even a little slower in 2015. Now that China is on the verge of being the world's largest economy, the question of its future growth matters not just for 1.3 billion Chinese, but for the global economy as a whole. Zheng Liu offers some insight into the slowdown in a short essay, "Is China's Growth Miracle Over?", published as an "Economic Letter" by the Federal Reserve Bank of San Francisco (August 10, 2015).

For some overall perspective, the blue line shows annual per capita growth of GDP for China, and the red line shows annual per capita growth of GDP for the US economy. These differences in annual growth rates are extraordinary, and they do add up, year after year after year.
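
A quick compounding calculation, using round numbers of my own rather than the exact series in the figure, shows why those gaps add up so dramatically.

# How growth gaps cumulate -- round numbers for illustration, not the plotted series.

def growth_factor(annual_rate, years):
    """Multiple by which a quantity grows at `annual_rate` for `years`."""
    return (1 + annual_rate) ** years

years = 30
china = growth_factor(0.09, years)           # roughly 9% per capita growth (assumed)
united_states = growth_factor(0.02, years)   # roughly 2% per capita growth (assumed)

print(f"China, {years} years at 9%: income multiplies by about {china:.0f}x")
print(f"US, {years} years at 2%: income multiplies by about {united_states:.1f}x")
print(f"Relative catch-up over the period: about {china / united_states:.0f}x")

Even if the assumed rates are off by a percentage point or two, the basic message survives: a sustained gap of several percentage points in per capita growth multiplies relative incomes many times over within a generation.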



In the last few years, China's growth patterns have been heavily affected by a slowdown in its exports during the Great Recession and its aftermath, and by an enormous surge in debt. But over the longer term, the key question is whether China's growth rate slows down. Liu offers a thought-provoking figure here comparing China with Japan and South Korea. China's growth is shown by the rectangles, and you can see, for example, that its growth in 2014 was below the earlier levels from the last few decades. Japan's growth is shown by the red circles, and you can see how its growth fell from the 1960s to the 1970s, and more recently to the circles at the bottom right. Korea's growth is shown by the triangles, and again, you can see the decline in the growth rate over time.






So is China going to keep growing at an annual rate of around 7%, or will growth fall to the 2-3% range? Here's the capsule case for pessimism, and then for optimism, from Liu.

China had a real GDP per capita of about $2,000 in the 1980s, which rose steadily to about $5,000 in the 2000s and to over $10,000 in 2014. If China continues to grow at a rate of 6 or 7%, it could move into high-income status in the not-so-distant future. However, if China’s experience mirrors that of its neighbors, it could slow to about 3% average growth by the 2020s, when its per capita income is expected to rise to about $25,000.
This may appear to be quite a pessimistic scenario for China, but China’s long-term growth prospects are challenged by a number of structural imbalances. These include financial repression, the lack of a social safety net, an export-oriented growth strategy, and capital account restrictions, all of which contributed to excessively high domestic savings and trade imbalances. According to the National Bureau of Statistics of China, the household saving rate increased from 15% in 1990 to over 30% in 2014. High savings have boosted domestic investment, but allocations of credit and capital remain highly inefficient. The banking sector is largely state-controlled, and bank loans disproportionately favor state-owned enterprises (SOEs) at the expense of more productive private firms. According to one estimate, the misallocation of capital has significantly depressed productivity in China. If efficiency of capital allocations could be improved to a level similar to that in the United States, then China’s total factor productivity could be increased 30–50% ...
Despite the slowdown, there are several reasons for optimism. First, China’s existing allocations of capital and labor leave a lot of room to improve efficiency. If the proposals for financial liberalization and fiscal and labor market reforms can be successfully put in place, improved resource allocations could provide a much-needed boost to productivity. Second, China’s technology is still far behind advanced countries’. According to the Penn World Tables, China’s total factor productivity remains about 40% of the U.S. level. If trade policies such as exchange rate pegs and capital controls are liberalized—as intended in the reform blueprints—then China could boost its productivity through catching up with the world technology frontier. Third, China is a large country, with highly uneven regional development. While the coastal area has been growing rapidly in the past 35 years, its interior region has lagged. As policy focus shifts to interior region development, growth in the less-developed regions should accelerate. With the high-speed rails, airports, and highways already built in the past few years, China has paved the way for this development. As the interior area catches up with the coastal region, convergence within the country should also help boost China’s overall growth 
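
As a rough check on the two scenarios Liu describes, simple compounding from a starting point of roughly $10,000 in per capita GDP gives a feel for the stakes; the starting level and horizons below are round-number assumptions of mine, not Liu's projections.

# Simple compounding check of the growth scenarios Liu describes.
# Starting level and horizons are round-number assumptions, not Liu's projections.

def project_income(start, annual_rate, years):
    """Per capita income after `years` of growth at `annual_rate`."""
    return start * (1 + annual_rate) ** years

start_income = 10_000  # approximate real GDP per capita in 2014, in dollars

for rate in (0.07, 0.03):
    for horizon in (10, 15):
        level = project_income(start_income, rate, horizon)
        print(f"{rate:.0%} growth for {horizon} years: roughly ${level:,.0f} per capita")

At something like 7 percent growth, per capita income roughly doubles within a decade and passes $25,000 within about 15 years; at 3 percent, the same milestone is decades away, which is why the timing of any slowdown matters so much for whether China reaches high-income status.
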
I'm a long-term optimist about China's economic growth, both because I think it has been laying the basis for future growth by boosting education, investment, and technology, and because China's economy as a whole (not just the richer coastal areas) still has a lot of room for catch-up growth. But China's economy has reached a size where it urgently needs better functioning from its banks and financial system, and in the shorter term, the problems of the financial system are likely to keep getting the headlines.