Monday, November 20, 2017

Why Has Life Insurance Ownership Declined?

Back in the first half of the 19th century, life insurance was unpopular in the US because it was broadly considered to be a form of betting with God against your own life. After a few decades of insurance company marketing efforts, life insurance was transformed into a virtuous purchase for any good and devout husband. But in recent decades, life insurance has been in decline.

Daniel Hartley, Anna Paulson, and Katerina Powers look at recent patterns of life insurance and bring the puzzle of its decline into sharper definition in "What explains the decline in life insurance ownership?" in Economic Perspectives, published by the Federal Reserve Bank of Chicago (41:8, 2017). The story of shifting attitudes toward life insurance in the 19th century US is told by Viviana A. Zelizer in a wonderfully thought-provoking 1978 article, "Human Values and the Market: The Case of Life Insurance and Death in 19th-Century America," American Journal of Sociology (November 1978, 84:3, pp. 591-610).

With regard to recent patterns, Hartley, Paulson, and Powers write: "Life insurance ownership has declined markedly over the past 30 years, continuing a trend that began as early as 1960. In 1989, 77 percent of households owned life insurance (see figure 1). By 2013, that share had fallen to 60 percent." In the figure, the blue line shows any life insurance, the red line shows the decline in term life, and the gray line shows the decline in cash value life insurance.


Early in the 19th century, the costs of death and funerals were largely a family and neighborhood affair. As Zelizer points out, given the attitudes of the time, life insurance was commercially unsuccessful because it was viewed as betting on death. It was widely believed that such a bet might even hasten death, with blood money being received by the life insurance beneficiary. For example, Zelizer wrote:

"Much of the opposition to life insurance resulted from the apparently speculative nature of the enterprise; the insured were seen as `betting' with their lives against the company. The instant wealth reaped by a widow who cashed her policy seemed suspiciously similar to the proceeds of a winning lottery ticket. Traditionalists upheld savings banks as a more honorable economic institution than life insurance because money was accumulated gradually and soberly. ...  A New York Life Insurance Co. newsletter (1869, p. 3) referred to the "secret fear" many customers were reluctant to confess: `the mysterious connection between insuring life and losing life.' The lists compiled by insurance companies in an effort to respond to criticism quoted their customers' apprehensions about insuring their lives: "I have a dread of it, a superstition that I may die the sooner" (United States Insurance Gazette [November 1859], p. 19). ... However, as late as the 1870s, "the old feeling that by taking out an insurance policy we do somehow challenge an interview with the 'king of terrors' still reigns in full force in many circles" (Duty and Prejudice 1870, p. 3). Insurance publications were forced to reply to these superstitious fears. They reassured their customers that "life insurance cannot affect the fact of one's death at an appointed time" (Duty and Prejudice 1870, p. 3). Sometimes they answered one magical fear with another, suggesting that not to insure was "inviting the vengeance of Providence" (Pompilly 1869). ... An Equitable Life Assurance booklet quoted wives' most prevalent objections: "Every cent of it would seem to me to be the price of your life .... it would make me miserable to think that I were to receive money by your death .... It seems to me that if [you] were to take a policy [you] would be brought home dead the next day" (June 1867, p. 3)."
However, over the course of several decades, insurance companies marketed life insurance with a message that buying it was actually a loving duty that a devout husband owed his family. As Zelizer argues, the rituals and institutions surrounding what society viewed as a "good death" shifted. She wrote:
"From the 1830s to the 1870s life insurance companies explicitly justified their enterprise and based their sales appeal on the quasi-religious nature of their product. Far more than an investment, life insurance was a `protective shield' over the dying, and a consolation `next to that of religion itself' (Holwig 1886, p. 22). The noneconomic functions of a policy were extensive: `It can alleviate the pangs of the bereaved, cheer the heart of the widow and dry the orphans' tears. Yes, it will shed the halo of glory around the memory of him who has been gathered to the bosom of his Father and God' (Franklin 1860, p. 34). ... life insurance gradually came to be counted among the duties of a good and responsible father. As one mid-century advocate of life insurance put it, the man who dies insured and `with soul sanctified by the deed, wings his way up to the realms of the just, and is gone where the good husbands and the good fathers go' (Knapp 1851, p. 226). Economic standards were endorsed by religious leaders such as Rev. Henry Ward Beecher, who pointed out, `Once the question was: can a Christian man rightfully seek Life Assurance? That day is passed. Now the question is: can a Christian man justify himself in neglecting such a duty?' (1870)."
Zelizer's work is a useful reminder that many products, including life insurance, are not just about prices and quantities in the narrow economic sense, but are also tied to broader social and institutional patterns.  

The main focus of Hartley, Paulson, and Powers is to explore the extent to which shifts in socioeconomic and demographic factors can explain the fall in life insurance: that is, have socioeconomic or demographic groups that were less likely to buy life insurance become larger over time? However, after doing a breakdown of life insurance ownership by race/ethnicity, education level, and income level, they find that the decline in life insurance is widespread across pretty much all groups. In other words, the decline in life insurance doesn't seem to be (primarily) about socioeconomic or demographic change, but rather about other factors. They write: 
"Instead, [life insurance] ownership has decreased substantially across a wide swath of the population. Explanations for the decline in life insurance must lie in factors that influence many households rather than just a few. This means we need to look beyond the socioeconomic and demographic factors that are the focus of our analysis. A decrease in the need for life insurance due to increased life expectancy is likely to be an especially important part of the explanation. In addition, other potential factors include changes in the tax code that make the ability to lower taxes through life insurance less attractive, lower interest rates that also reduce incentives to shelter investment gains from taxes, and increases in the availability and decreases in the cost of substitutes for the investment component of cash value life insurance." 
It's intriguing to speculate about what the decline in life insurance purchases tells us about our modern attitudes and arrangements toward death, in a time of longer life expectancies, more households with two working adults, the backstops provided by Social Security and Medicare, and perhaps also shifts in how many people feel that their souls are sanctified (in either a religious or a secular sense) by the purchase of life insurance. 

Friday, November 17, 2017

Brexit: Still a Process, Not Yet a Destination

I happened to be in the United Kingdom on a long-planned family vacation on June 23, 2016, when the Brexit vote took place. At the time, I offered a stream-of-consciousness "Seven Reflections on Brexit" (June 26, 2016). But more than a year has now passed, and Thomas Sampson sums up the research on what is known and what might come next in "Brexit: The Economics of International Disintegration," which appears in the Fall 2017 issue of the Journal of Economic Perspectives.

(As regular readers know, my paying job--as opposed to my blogging hobby--is Managing Editor of the JEP. The American Economic Association has made all articles in JEP freely available, from the most recent issue back to the first. For example, you can check out the Fall 2017 issue here.)

Here's Sampson's basic description of the UK and its position in the international economy before Brexit. For me, it's one of those descriptions that doesn't use any weighted rhetoric, but nonetheless packs a punch.
"The United Kingdom is a small open economy with a comparative advantage in services that relies heavily on trade with the European Union. In 2015, the UK’s trade openness, measured by the sum of its exports and imports relative to GDP, was 0.57, compared to 0.28 for the United States and 0.86 for Germany (World Bank 2017). The EU accounted for 44 percent of UK exports and 53 percent of its imports. Total UK–EU trade was 3.2 times larger than the UK’s trade with the United States, its second-largest trade partner. UK–EU trade is substantially more important to the United Kingdom than to the EU. Exports to the EU account for 12 percent of UK GDP, whereas imports from the EU account for only 3 percent of EU GDP. Services make up 40 percent of the UK’s exports to the EU, with “Financial services” and “Other business services,” which includes management consulting and legal services, together comprising half the total. Brexit will lead to a reduction in economic integration between the United Kingdom and its main trading partner."
A substantial reduction in trade will cause problems for the UK economy. Of course, the estimates will vary according to just what model is used, and Sampson runs through the main possibilities. He summarizes in this way: 
"The main conclusion of this literature is that Brexit will make the United Kingdom poorer than it would otherwise have been because it will lead to new barriers to trade and migration between the UK and the European Union. There is considerable uncertainty over how large the costs of Brexit will be, with plausible estimates ranging between 1 and 10 percent of UK per capita income. The costs will be lower if Britain stays in the European Single Market following Brexit. Empirical estimates that incorporate the effects of trade barriers on foreign direct investment and productivity find costs 2–3 times larger than estimates obtained from quantitative trade models that hold technologies fixed."
What will come next after Brexit isn't yet clear, and may well take years to negotiate. In the meantime, the main shift seems to be that the foreign exchange rate for the pound has fallen, while inflation has risen, which means that real inflation-adjusted wages have declined. This national wage cut has helped keep Britain's industries competitive on world markets, but it's obviously not a desirable long-run solution.

But in the longer run, as the UK struggles to decide what actually comes next after Brexit, Sampson makes a distinction worth considering: Is the support for Brexit about national identity and taking back control, even if it makes the country poorer, or is it about renegotiating trade agreements and other legislation to do more to address the economic stresses created by globalization and technology? He writes:

"Support for Brexit came from a coalition of less-educated, older, less economically successful and more socially conservative voters who oppose immigration and feel left behind by modern life. Leaving the EU is not in the economic interest of most of these left-behind voters. However, there is currently insufficient evidence to determine whether the leave vote was primarily driven by national identity and the desire to “take back control” from the EU, or by voters scapegoating the EU for their
economic and social struggles. The former implies a fundamental opposition to deep economic and political integration, even if such opposition brings economic costs, while the later suggests Brexit and other protectionist movements could be addressed by tackling the underlying reasons for voters’ discontent."
For me, one of the political economy lessons of Brexit is that it's relatively easy to get a majority against a specific unpopular element of the status quo, while leaving open the question of what happens next. It's a lot harder to get a majority in favor of a specific change. That problem gets even harder when it comes to international agreements, because while it's easy for UK politicians to make pronouncements about what agreements the UK would prefer, trade negotiators in the EU, the US, and the rest of the world have a say, too. Sampson discusses the main post-Brexit options, and I've blogged about them in "Brexit: Getting Concrete About Next Steps" (August 2, 2016). While the process staggers along, this "small open economy with a comparative advantage in services that relies heavily on trade with the European Union" is adrift in uncertainty.

Thursday, November 16, 2017

US Wages: The Short-Term Mystery Resolved

The Great Recession ended more than eight years ago, in June 2009. The US unemployment rate declined slowly after that, but it has now been below 5.0% every month for more than two years, since September 2015. Thus, an ongoing mystery for the US economy is: why haven't wages started to rise more quickly as labor market conditions have improved? Jay Shambaugh, Ryan Nunn, Patrick Liu, and Greg Nantz provide some factual background to address this question in "Thirteen Facts about Wage Growth," written for the Hamilton Project at the Brookings Institution (September 2017). The second part of the report addresses the question: "How Strong Has Wage Growth Been since the Great Recession?"

For me, one surprising insight from the report is that real wage growth--that is, wage growth adjusted for inflation--has actually not been particularly slow during the most recent upswing. The upper panel of this figure shows real wage growth since the early 1980s. The horizontal lines show the growth of wages after each recession. Real wage growth in the last few years is actually higher than after the previous few recessions. The bottom panel shows nominal wage growth, which is not adjusted for inflation. By that measure, wage growth in recent years is lower than after the last few recessions. Thus, I suspect that one reason behind the perception of slow wage growth is that many people are focused on nominal rather than real wages.


Government statistics offer a lot of ways of measuring wage growth. The graphs above show wage growth in "real average hourly earnings for production and nonsupervisory workers," a category that covers about 100 million of the 150 million US workers.

An alternative and broader approach looks at what is called the Employment Cost Index, which is based on a National Compensation Survey of employers. To adjust for inflation, I use the measure of inflation called the Personal Consumption Expenditures price index, which captures inflation just for the personal consumption part of the economy that is presumably most relevant to workers. I also use the version of this index that strips out jumps in energy and food prices. This is the measure of the inflation rate that the Federal Reserve actually focuses on.

Economists using these measures were pointing out a couple of years ago that real wages seemed to be on the rise. The blue line shows the annual change in wages and salaries for all civilian workers, using the ECI, while the red line shows the PCE measure of inflation. The gap between the two is the real gain in wages, which you can see started to emerge in 2015.
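As a back-of-the-envelope version of the calculation behind that gap (the growth and inflation rates here are hypothetical stand-ins, not actual ECI or PCE readings):

```python
# Real wage growth = nominal ECI growth deflated by core PCE inflation.
# The rates below are hypothetical, for illustration only.
nominal_wage_growth = 0.025  # annual ECI wages-and-salaries growth (assumed)
core_pce_inflation = 0.015   # core PCE inflation (assumed)

# Exact deflation; for small rates this is close to simple subtraction.
real_wage_growth = (1 + nominal_wage_growth) / (1 + core_pce_inflation) - 1
print(f"real wage growth: {real_wage_growth:.2%}")  # about 0.99%
```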

Not only has the recovery in US real wages been a bit higher than usual for the last few decades, and especially prominent in the last couple of years, but there is good reason to believe that the wage statistics since the Great Recession may be picking up a change in the composition of the workforce that tends to make wage growth look slower. Shambaugh, Nunn, Liu, and Nantz explain (citations and footnotes omitted):
"In normal times, entrants to full-time employment have lower wages than those exiting, which tends to depress measured wage growth. During the Great Recession this effect diminished substantially when an unusual number of low-wage workers exited full-time employment and few were entering. After the Great Recession ended, the recovering economy began to pull workers back into full-time employment from part-time employment ... and nonemployment, while higher-paid, older workers left the labor force. Wage growth in the middle and later parts of the recovery fell short of the growth experienced by continuously employed workers, reflecting both the retirements of relatively high-wage workers and the reentry of workers with relatively low wages. In 2017 the effect of this shifting composition of employment remains large, at more than 1.5 percentage points. If and when growth in full-time employment slows, we can expect this effect to diminish somewhat, providing a boost to measured wage growth."
The baby boomer generation is hitting retirement and leaving the labor force, as relatively highly-paid workers at the end of their careers. New workers entering the labor force, together with low-skilled workers being drawn back into the labor force, tend to have lower wages and salaries. This makes wage growth look low--but what's happening is in part a shift in types of workers. 

One other fact from Shambaugh, Nunn, Liu, and Nantz is that wage growth has been strong at the bottom and the top of the wage distribution, but slower in the middle. This figure splits the wage distribution into five quintiles, and shows the wage growth for production and nonsupervisory workers in each. 

Taking these factors together, the "mystery" of why wages haven't recovered more strongly since the end of the Great Recession appears to be resolved. However, a bigger mystery remains. Why have wages and salaries for production and nonsupervisory workers done so poorly not in the last few years, but over the last few decades?

There's a long list of potential reasons: slow productivity growth, rising inequality, dislocations from globalization and new technology, a slowdown in the rate of start-up firms, weakness of unions and collective bargaining, less geographic mobility by workers, and others. These factors have been discussed here before, and will be again, but not today. Shambaugh, Nunn, Liu, and Nantz provide some background figures and discussion of these longer-term factors, too. 

Wednesday, November 15, 2017

Rethinking Development: Larry Summers

Larry Summers delivered a speech on the subject of "Rethinking Global Development Policy for the 21st Century" at the Center for Global Development on November 8, 2017. A video of the 45-minute lecture is here. Here are a few snippets, out of many I could have chosen:

The dramatic global convergence between rich and poor
"There has been more convergence between poor people in poor countries and rich people in rich countries over the last generation than in any generation in human history. The dramatic way to say it is that between the time of Pericles and London in 1800, standards of living rose about 75 percent in 2,300 years. They called it the Industrial Revolution because for the first time in human history, standards of living were visibly and 2 meaningfully different at the end of a human lifespan than they had been at the beginning of a human lifespan, perhaps 50 percent higher during the Industrial Revolution. Fifty percent is the growth that has been achieved in a variety of six-year periods in China over the last generation and in many other countries, as well. And so if you look at material standards of living, we have seen more progress for more people and more catching up than ever before. That is not simply about things that are material and things that are reflected in GDP. ... [I]f current trends continue, with significant effort from the global community, it is reasonable to hope that in 2035 the global child mortality rate will be lower than the US child mortality rate was when my children were born in 1990. That is a staggering human achievement. It is already the case that in large parts of China, life expectancy is greater than it is in large parts of the United States." 

The marginal benefit of development aid is what is enabled, not what is funded
"I remember as a young economist who was going to be the chief economist of the World Bank sitting and talking with Stan Fischer, who was my predecessor as the chief economist of the World Bank. And we were talking, and I was new to all this. I had never done anything in the official sector. And I said, "Stan, I don't get it. If a country has five infrastructure projects and the World Bank can fund two of them, and the World Bank is going to cost-benefit analyze and the World Bank is going to do all its stuff, I would assume what the country does is show the World Bank its two best infrastructure projects, because that will be easiest, and if it gets money from the World Bank, then it does one more project, but what the World Bank is actually buying is not the project it is being shown, it is the marginal product that it is enabling. And so why do we make such a fuss of evaluating the particular quality of our projects?" And Stan listened to me. And he looked at me. He's a very wise man. And he said, "Larry, you know, it is really interesting. When I first got to the bank, I always asked questions like that." "But now I've been here for two years, and I don't ask questions like that. I just kind of think about the projects, because it is kind of too hard and too painful to ask questions like that."
Funds from developed-world governments and multilateral institutions have much less power
"[O]ur money—and I mean by that our assistance and the assistance of the multilateral institutions in which we have great leverage—is much less significant than it once was. Perhaps the best way to convey that is with a story. In 1991, when I was new to all of this, I was working as the chief economist of the World Bank, and the first really important situation in which I had any visibility at all was the Indian financial crisis that took place in the summer of 1991. And at that point, India was near the brink. It was so near the brink that, at least as I recall the story, $1 billion of gold was with great secrecy put on a ship by the Indians to be transported to London, where it could be collateral for an emergency loan that would permit the Indian government to meet its payroll at the end of the month.  And at that moment, the World Bank was in a position over the next year to lend India $3 billion in conjunction with its economic reform program. And the United States had an important role in shaping the World Bank's strategy. Well, that $3 billion was hugely important to the destiny of a sixth of humanity. Today, the World Bank would have the capacity to lend India in a year $6 billion or $7 billion. But India has $380 billion—$380 billion—in reserves dominantly invested in Treasury bills earning 1 percent. And India itself has a foreign aid budget of $5 billion or $6 billion. And so the relevance of the kind of flows that we are in a position to provide officially to major countries is simply not what it once was."
Protecting the world from pandemic flu vs. the salary of a college football coach
"[T]he current WHO budget for pandemic flu is less than the salary of the University of Michigan's football coach—not to mention any number of people who work in hedge funds. And that seems manifestly inappropriate. And we do not yet have any settled consensus on how we are going to deal with global public goods and how that is going to be funded."

Tuesday, November 14, 2017

Regional Price Parities: Comparing Cost of Living Across Cities and States

Many years ago I heard a story from a member of a committee of a midwestern university that was thinking about hiring a certain economist. The economist had an alternative offer from a southern California university that paid a couple of thousand dollars more in annual salary. The economist offered to come to the midwestern university if it would match this slightly higher salary. But the hiring committee declined to match. As the story was told to me, the hiring committee talked it over and felt: "Spending a couple of thousand dollars more isn't actually the issue. The key fact is that the cost of living is vastly higher in southern California. An economist who isn't able to recognize that fact--and thus who doesn't recognize that the lower salary actually buys a higher standard of living here in the midwest--isn't someone we want for our department."

The point is a general one. Getting a higher salary in California or New York, and then needing to pay more for housing and perhaps other costs of living as well, can easily eat up that higher salary. In fact, the Bureau of Economic Analysis now calculates Regional Price Parities, which adjust for higher or lower levels of housing, goods, and services across areas. Comparisons are available at the state level, the metropolitan-area level, and for non-metro areas within states. To illustrate, here are a couple of maps taken from "Living Standards in St. Louis and the Eighth Federal Reserve District: Let’s Get Real," an article by Cletus C. Coughlin, Charles S. Gascon, and Kevin L. Kliesen in the Review of the Federal Reserve Bank of St. Louis (Fourth Quarter 2017, pp. 377-94).

Here are the US states color-coded according to per capita GDP. For example, you can see that California and New York are in the highest category. My suspicion is that states like Wyoming, Alaska, and North Dakota are in the top category because of their energy production.



And now here are the US states color-coded according to per capita GDP with an adjustment for Regional Price Parities: that is, it's a measure of income adjusted for what it actually costs to buy housing and other goods. With that change, California, New York, and Maryland are no longer in the top category. However, a number of midwestern states like Kansas, Nebraska, South Dakota, and my own Minnesota move into the top category. A number of states in the mountain west and south that were in the lowest-income category when just looking at per capita GDP move up a category or two when the Regional Price Parities are taken into account.


When thinking about political and economic differences across states, these differences in income levels, housing prices, and other costs of living are something to take into account.

Monday, November 13, 2017

Choice and Health Insurance Coverage

If you think of Medicare and Medicaid as examples of "single payer" health insurance plans, you are at best partially correct. Government health spending (including federal, state, and local) does account for about 46% of total US health care spending. However, a major and largely unremarked change is that government health care spending is being filtered through a system in which those receiving the government health insurance need to make choices among privately-run health insurance plans.

A three-paper symposium in the Fall 2017 issue of the Journal of Economic Perspectives tackles this issue of choice and health insurance coverage. The introductory essay by Jonathan Gruber is called "Delivering Public Health Insurance through Private Plan Choice in the United States."
Then Michael Geruso and Timothy Layton focus on the issue of "Selection in Health Insurance Markets and Its Policy Remedies," while Keith Marzilli Ericson and Justin Sydnor focus on how difficult it can be for consumers to make wise choices among health insurance plans--especially when the providers of these plans may have an incentive to slant those choices in certain directions--in "The Questionable Value of Having a Choice of Levels of Health Insurance." For example, Gruber describes how US government health care spending has moved away from a "single payer" approach over time, and writes:
"Currently, almost one-third of Medicare enrollees are in privately provided insurance plans for all of their medical spending, and another 43 percent of Medicare enrollees have standalone private drug plans through the Medicare Part D program. More than three-quarters of Medicaid enrollees are in private health insurance plans. Those receiving the subsidies made available under the Patient Protection and Affordable Care Act of 2010 do so through privately provided insurance plans that are
reimbursed by the government."
Or here's a figure from Geruso and Layton. When you take into account the people choosing between Medicaid managed care plans, Medicare "Advantage" plans (as part of Medicare Part C), Medicare prescription drug benefits (as part of Medicare Part D), and people choosing between health insurance plans in the insurance "marketplaces" set up by the Patient Protection and Affordable Care Act of 2010, you have a total of nearly 100 million enrollees. Of course, if you're looking at choice in health insurance more broadly, many individuals also have some choice among the health insurance plans supported by their employers, too.
In all insurance markets, not just health insurance, choice can be a double-edged sword. On one side, choice lets people match up the characteristics of different health insurance plans to their personal preferences and needs, which clearly can be positive. But health insurance providers here have mixed incentives: in this choice-based health insurance universe, they want to encourage people to choose their plans, but they are also trying not to attract disproportionate numbers of people who are more likely to have high health care costs in the future. Health insurance plans have a very wide array of characteristics: not just the structure of deductibles, copayments, and annual caps, but also limits on the breadth of a provider network and how costly (in terms of out-of-pocket costs) or difficult (in terms of paperwork and delay) it can be to go outside that network. Another limit can be on what types of care are covered in extreme health situations. With these difficulties in mind, a number of conventional problems arise.

Health insurance markets will have a tendency to sort people into groups, where those who regard themselves as healthy at present will seek out health insurance that covers less and has a lower cost, while those who know that they are likely to have higher health-care costs will tend to seek out insurance that covers more but has a higher cost. As this dynamic emerges, so too do a number of problems:

Insurance companies will have an incentive to structure their insurance plans with the idea of attracting the more-healthy consumers, while encouraging less healthy consumers to shop elsewhere, which is sometimes known as "cream-skimming." Health insurance plans that would tend to be more attractive to the less healthy will tend to be packed full of out-of-pocket costs and restrictions on the network of service providers. At an extreme, health insurance plans suitable for those with high costs may become so costly or limited as to be essentially unavailable--which of course defeats the purpose of insurance altogether--a dynamic sometimes known as a "death spiral" for that market. And some of the people who signed up for lower-cost plans, either because they expected to be healthy or just because they focused on the low costs, will instead turn out to be unhealthy--and discover that their low-cost plan provided only limited coverage.
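The unraveling logic behind a "death spiral" can be made concrete with a small simulation. Here's a stylized sketch; all numbers, including the enrollment rule, are invented for illustration:

```python
# Stylized adverse-selection "death spiral": the insurer prices to the
# average cost of the current pool, the healthiest enrollees drop out,
# and the premium ratchets upward. All numbers are invented.
import numpy as np

rng = np.random.default_rng(1)
# Each person's expected annual health cost, skewed like real health spending.
expected_costs = rng.lognormal(mean=8.0, sigma=1.0, size=100_000)

premium = expected_costs.mean()  # break-even premium if everyone enrolls
for year in range(1, 6):
    # Crude enrollment rule: stay only if your expected cost is at least
    # half the premium; otherwise the coverage isn't worth it to you.
    enrolled = expected_costs > 0.5 * premium
    premium = expected_costs[enrolled].mean()  # re-price to the sicker pool
    print(f"year {year}: {enrolled.mean():.0%} enrolled, premium ${premium:,.0f}")
```

Each pass through the loop, the pool gets sicker and smaller and the premium rises--exactly the dynamic that pushes plans for the less healthy toward being essentially unavailable.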

Of course, these are exactly the issues that have been playing out in the state-level insurance "marketplaces" set up under the Affordable Care Act. Economic analysis points out that these kinds of issues are endemic to choice-based insurance markets. These problems lead to a parade of policy interventions in health insurance markets, laid out by Geruso and Layton.

There are often rules for "premium rating," which limit the price differences between insurance plans for different groups, or rules that insurance companies cannot reject an applicant outright, but must offer some kind of plan. These rules seek to avoid the problem that a consumer who is likely to have high health care costs can't find an insurance policy at all, but given the many ways in which health insurance can be structured, the available policies can still look rather scanty.

The government can impose penalties for not purchasing health insurance, or subsidies for buying it. In practice, the state-level health insurance marketplaces do both of these.

"Risk adjustment" refers to the situation which a statistical formula is used to predict who is likely to have higher or lower health insurance costs--so that the government pays  that amount to the insurance company.  For example, in the Medicare Advantage program, where Medicare recipients can choose among private insurance plans rather than the government single-payer approach, the government needs to avoid a situation where the private health insurance firms just attract the healthier participants, and so it uses a risk adjustment formula. The evidence is that this risk adjustment is imperfect, in the sense that the higher payments for those expected-to-be-sick don't quite account for the higher costs, but it's better than not having it at all. Medicaid and the state-level insurance marketplaces have risk adjustment procedures, too.

Yet another policy is "contract regulation," to require that insurance firms offer certain benefits. Of course, the question of what coverage is required, and the extent to which firms can require additional payments or limit the providers for certain kinds of coverage, remain controversial.

The bottom line here is that choice in health insurance markets unleashes both good and distressing dynamics. The good dynamic is that people can select the plan that they think best suits their immediate needs, and to some extent it focuses insurance companies on providing what people actually want. The distressing dynamic is that as people do this, the health insurance market for those who need more extensive health insurance will stagger, for all the reasons given above. The available public policies that seek to address this issue--premium rating, penalties/subsidies to encourage buying insurance, risk adjustment, and contract regulations--all have understandable underlying purposes. But they add a great deal of complexity to an already messy market, and only partially address the underlying problems.

The ongoing US shift in how public health insurance is increasingly provided through private health insurance firms should influence the discussion over a "single payer" approach to health care.

Traditionally, the term "single payer" has referred to direct government payments to health care providers. In this sense, a true advocate of "single payer" in the traditional meaning cannot advocate "Medicare for all," at least not as Medicare is currently constructed, because a large part of Medicare (both the choice section in Part C and the pharmaceutical benefits in Part D) is no longer a single-payer system in the traditional meaning of the term. Similarly, given that more than three-quarters of Medicaid enrollees are in private plans, an expansion of Medicaid is largely an expansion of government paying private insurers rather than paying health care providers directly. A supporter of "single payer" should presumably oppose both the state-level insurance "marketplaces" and the provision of public health insurance through private-sector insurance plans.

Conversely, those who oppose "single payer" should contemplate whether their concerns about government control over health care are ameliorated to some extent if the beneficiaries of those programs have a degree of choice across health insurance firms and health providers--albeit in regulated markets.

Saturday, November 11, 2017

Decline in US Mail Leveling Out

The US Postal Service handles over 150 billion pieces of mail each year, which is about 47% of all mail sent in the world. But it has faced financial troubles for years, in part because it is caught in a political vise that limits its flexibility to make adjustments that could trim costs, and in part because the internet has taken a bite out of mail service. The Office of the Inspector General of the US Postal Service describes some trends in "What’s up with Mail? How Mail Use Is Changing across the United States" (RARC-WP-17-006, April 17, 2017).

Here are volumes of mail-sent-per-adult for three categories that make up over 90 percent of the volume of what is delivered by the USPS: single-piece first class mail, first-class mail presorted, and marketing mail.

Single-piece first-class mail per adult started dropping in 1996, and has fallen by 70% since then.

First-class mail in the presorted category (which is more likely to be mailings sent by firms or government to consumers) continued to rise until the Great Recession, but has declined by about one-third since then.

Marketing mail dropped in the Great Recession, and is now down by more than one-quarter from 2007 levels, but its decline has been much smaller in recent years. As the report notes: "Marketing Mail is also playing an increasingly prominent role in the Postal Service’s product portfolio. At approximately 80 billion pieces, Marketing Mail volume is higher than FCM-SP and FCM-Presort combined. In 2015, it made up about 52 percent of total mail volume."
In part, I find these patterns interesting as a reflection of how America communicates, and how the ease and convenience of web-based communication has affected the postal service. 

But in addition, the report notes that the rate of decline in mail use seems to have slowed in the last few years. I've written in the past about steps that the US Postal Service could take to improve its financial outlook, and I won't repeat that here. After all, the possibilities for innovative change at the Postal Service have been strangled by political infighting, which has left all nine appointed slots on the Board of Governors of the Postal Service unfilled since the end of 2016.

But if the quantity of these core lines of the mail business are not falling as fast, while "packages have become an increasingly prominent product for the Postal Service, with volume growing 68 percent to 5.2 billion pieces between 2009 and 2016," it becomes more feasible to think about how to restructure and right-size the Postal Service in a sensible way.

Friday, November 10, 2017

The Darker Side of Peer-to-Peer Lending

Peer-to-peer lending refers to an economic transaction in which individual investors lend directly to individual borrowers using online platforms. Yuliya Demyanyk, Elena Loutskina, and Daniel Kolliner illuminate the darker side of such arrangements in "Three Myths about Peer-to-Peer Loans," written as an "Economic Commentary" for the Federal Reserve Bank of Cleveland (November 9, 2017). Their more detailed research paper on the topic is available here. In the "Economic Commentary" piece, they summarize this way:
"Peer-to-peer (P2P) lending came to the United States in 2006, when individual investors began lending directly to individual borrowers via online platforms. In the decade since, the industry has grown dramatically ...  Online lenders and policymakers have suggested that the P2P market offers unique benefits to consumers. Three benefits are often repeated and seem to have become widely accepted. First, P2P loans allow consumers to refinance expensive credit card debt. Second, P2P loans can help customers build their credit history and improve their credit scores. Finally, P2P proponents claim that P2P lending extends access to credit to those who are underserved by traditional banks. 
"But signs of problems in the P2P market are appearing. Defaults on P2P loans have been increasing at an alarming rate ...  We exploit a comprehensive set of credit bureau data to examine P2P borrowers, their credit behavior, and their credit scores. We find that, on average, borrowers do not use P2P loans to refinance preexisting loans, credit scores actually go down for years after P2P borrowing, and P2P loans do not go to the markets underserved by the traditional banking system. Overall, P2P loans resemble predatory loans in terms of the segment of the consumer market they serve and their impact on consumers’ finances. Given that P2P lenders are not regulated or supervised for antipredatory laws, lawmakers and regulators may need to revisit their position on online lending marketplaces."

The P2P sector is actually misnamed. As one might have predicted, it very quickly became a market where the supply of loans comes not from individuals, but rather from institutions like "hedge funds, banks, insurance companies, and asset managers." The amount loaned doubled from 2012 to 2016, and now exceeds $100 billion.

The authors have gained access to some useful data:
"We use data from the TransUnion credit bureau, in which we observe about 90,000 distinct individuals who received their first P2P loan between 2007 and 2012. We also observe about 10 million individuals who did not receive P2P loans and whom we label non-P2P individuals. Using a statistical technique called propensity score matching, we identify non-P2P individuals who are financially similar to P2P individuals during the two years prior to the date on which P2P individuals obtained their P2P loan. We match individuals based on the location of their residence, their credit score, their total debt, their income, their number of delinquencies in the past two years, and whether or not they have a mortgage."

Thus, the authors can compare those who take out a P2P loan to a group with similar financial characteristics, and consider whether 1) they have been more successful in reducing their debt burden after a year or two (they haven't); 2) they have been more successful in building up their credit score (they haven't); and 3) they are a group that was less likely to have access to bank loans and other credit before (they aren't).
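For readers curious about the mechanics, here is a minimal sketch of propensity score matching on synthetic data--an illustration of the general technique the authors name, not their actual TransUnion analysis:

```python
# Minimal propensity score matching sketch on synthetic data.
# Illustrative only; not the authors' actual TransUnion analysis.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 10_000
# Hypothetical covariates: credit score, total debt, income, delinquencies.
X = np.column_stack([
    rng.normal(680, 60, n),       # credit score
    rng.lognormal(10.0, 1.0, n),  # total debt
    rng.lognormal(10.5, 0.7, n),  # income
    rng.poisson(0.5, n),          # delinquencies in the past two years
])
# "Treatment": did the individual take out a P2P loan? In this synthetic
# world, higher credit scores make a loan somewhat more likely.
p_loan = 1 / (1 + np.exp(-((X[:, 0] - 680) / 60 - 2)))
treated = rng.random(n) < p_loan

# Step 1: estimate each person's propensity score, P(P2P loan | X).
model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, treated)
ps = model.predict_proba(X)[:, 1]

# Step 2: match each P2P borrower to the non-borrower with the
# closest propensity score.
nn = NearestNeighbors(n_neighbors=1).fit(ps[~treated].reshape(-1, 1))
_, idx = nn.kneighbors(ps[treated].reshape(-1, 1))
matched = np.flatnonzero(~treated)[idx.ravel()]

# The matched group should now resemble the borrowers on the covariates,
# so later outcomes (debt, credit scores) can be compared across groups.
print("mean credit score, P2P borrowers:", X[treated, 0].mean().round(1))
print("mean credit score, matched group:", X[matched, 0].mean().round(1))
```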

In a broader view, it's also troubling that each year, even though the economy has been experiencing a mild recovery, the P2P loans seem to be getting riskier. Here is the delinquency rate on P2P loans after one and two years. Each line shows the year in which the loan was made. The delinquency rates are rising over time.

It's useful to be clear on the potential policy problem here. I'm not concerned about the institutions that make P2P loans: they are regulated by the Securities and Exchange Commission, and they can look after themselves. Lots of borrowers seem to be taking on a P2P loan thinking that it's a first step to paying down their existing debt, but for the group as a whole, this expectation isn't being met. If a financial market is in some danger of melting down in a way that could take a few million borrowers along with it--with all the stresses of wage garnishment, charging higher fees for missed payments, property liens, even bankruptcy--that's a public policy problem.

Thursday, November 9, 2017

The Macroeconomy in Ongoing Transition: Mervyn King

Mervyn King delivered a provocative and intriguing 2017 Martin Feldstein Lecture at the National Bureau of Economic Research on the subject of "Uncertainty and Large Swings in Activity" (July 19, 2017). A written version of the presentation is available in the NBER Reporter (2017: 3, pp. 1-10), or you can watch the lecture and download the slides here.

King's argument has both a broad conceptual message for the study of macroeconomics and a specific diagnosis of the current moment. The broad message is that it is literally impossible to demonstrate with statistics that a certain macroeconomic model is "true." After all, drawing statistical conclusions requires a decent sample size. But to get a sample size of, say, 20 or 30 recessions in a given economy would take a long time--perhaps several centuries--and it is not plausible that any macroeconomic model remains "true" over that length of time. As King puts it (footnotes omitted):
"Let me give a simple example. It relates to my own experience when, as deputy governor of the Bank of England, I was asked to give evidence before the House of Commons Select Committee on Education and Employment on whether Britain should join the European Monetary Union. I was asked how we might know when the business cycle in the U.K. had converged with that on the Continent. I responded that given the typical length of the business cycle, and the need to have a minimum of 20 or 30 observations before one could draw statistically significant conclusions, it would be 200 years or more before we would know. And of course it would be absurd to claim that the stochastic process generating the relevant shocks had been stationary since the beginning of the Industrial Revolution. There was no basis for pretending that we could construct a probability distribution. As I concluded, `You will never be at a point where you can be confident that the cycles have genuinely converged; it is always going to be a matter of judgment.'"
In the current economic context, King takes aim at the macroeconomic perspective which argues that we had a pretty good model of the macroeconomy for the decades leading up to the Great Recession, but the model has broken down since then. The dashed line in the figure shows a trendline for growth of GDP per capita from 1960-2016. For the US economy, you can project that trendline backward to 1900: as I noted a few years ago, long-run US economic growth had a remarkable consistency from the late 19th century up through about 2010. However, the divergence from this long-run path in the aftermath of the Great Recession is quite noticeable. The trendline for the United Kingdom data doesn't project backward as well, but it does show a similar divergence from that trend in recent years.



Looking at the economy as represented in this figure, one might plausibly argue that the macroeconomy can be modeled by a fairly steady long-run trend, with some up-and-down fluctuations of recessions and recoveries around that trend. However, King suggests that this appearance is misleading. Instead, the world economy saw a dramatic shift starting in the mid-1990s that has continued since then, which can be seen in the pattern of real interest rates over time. King says: 
"From around the time when China and the members of the former Soviet Union entered the world trading system, long-term real interest rates have steadily declined to reach their present level of around zero. Such a fall over a long period is unprecedented. ... [M]uch effort has been invested in the attempt to explain why the "natural" real rate of interest has fallen to zero or negative levels. But there is nothing natural about a negative real rate of interest. It is simpler to see Figure 3 as a disequilibrium phenomenon that cannot persist indefinitely."


In King's view, the world economy is still adjusting to this shift, which has a number of components. High savings rates in China and Germany have helped to drive down real interest rates. Moreover, we have moved into a world economy where some countries have seemingly perpetual trade surpluses while others have seemingly perpetual trade deficits. King writes: 
"Both the U.S. and U.K. had substantial current account deficits, amounting in aggregate to around $600 billion, and China and Germany had correspondingly large current account surpluses. All four economies need to move back to a balanced growth path. But far too little attention has been paid to the problems involved in doing that. With unemployment at low levels, the key problem with slower-than-expected growth is not insufficient aggregate demand but a long period away from the balanced path, reflecting the fact that relative prices are away from their steady-state levels. The result is that the shortfall of GDP per head relative to the pre-crisis trend path was over 15 percent in both the U.S. and U.K. at the end of last year. Policies which focus only on reducing the real interest rate miss the point; all the relevant relative prices need to change, too." 
In short, King is offering an alternative diagnosis of our current slow-growth woes. In his view, the slow growth is not due to a lingering hangover from the high debt burdens that preceded the Great Recession, nor to a decline in technological opportunities, nor to a shortfall in investment related to "secular stagnation." Instead, King argues that what needs to happen is a shift in global prices between the tradeable and nontradeable sectors.

I'm adding King's explanation to my list of mental possibilities for what forces are underlying the slow productivity growth in the US economy.  But in addition, it's worth adding a dose of King-size skepticism about economists who arrive at any macroeconomic situation with a given model fixed in their minds, rather than trying to figure out which model is most likely to apply in a given case. King notes:
"Imagine that you had a problem in your kitchen, and summoned a plumber. You would hope that he might arrive with a large box of tools, examine carefully the nature of the problem, and select the appropriate tool to deal with it. Now imagine that when the plumber arrived, he said that he was a professional economist but did plumbing in his spare time. He arrived with just a single tool. And he looked around the kitchen for a problem to which he could apply that one tool. You might think he should stick to economics. But when dealing with economic problems, you should also hope that he had a box of tools from which it was possible to choose the relevant one. And there are times when there is no good model to explain what we see. The proposition that `it takes a model to beat a model' is rather peculiar. Why does it not take a fact to beat a model? And although models can be helpful, why do we always have to have one? After the financial crisis, a degree of doubt and skepticism about many models would be appropriate."

Wednesday, November 8, 2017

A Range of International Poverty Lines

Poverty is inevitably a relative phenomenon; that is, whether you are "poor" depends on the typical standard of living in your society. For example, the World Bank has used a poverty line of $1.90 per person per day since 2015. If you multiply this poverty line by a family of 3, for 365 days in a year, it equates to an annual poverty line of about $2,080 for that family. For comparison, the US poverty line in 2016 for a three-person family with a parent and two children would be $19,337.
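The annualized figure is just arithmetic on the daily line:

```python
# Annualizing the World Bank's $1.90-per-person-per-day line for a family of 3.
per_person_per_day = 1.90
family_size = 3
annual_line = per_person_per_day * family_size * 365
print(f"${annual_line:,.2f}")  # $2,080.50, the roughly $2,080 cited above
```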

It would take some odd mixture of cluelessness, heartlessness, and moral blindness to argue that poverty in the United States or other high-income countries should be defined in the same way as in low-income countries. But by similar logic, it seems unsuitable to use the same poverty line for what the World Bank would classify as "low-income" countries with a per capita GDP of less than $1,005 per year (for example, Afghanistan, Ethiopia, and Haiti), "lower middle income" countries with a per capita GDP between $1,006 and $3,955 (like Bangladesh, Nicaragua, and Nigeria), and "upper middle-income" countries with a per capita GDP from $3,956 to $12,235 (like Mexico, China, and Turkey). Thus, the World Bank is now planning to use "A Richer Array of Poverty Lines," in the words of Francisco Ferreira.
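The income-group cutoffs quoted above amount to a simple classifier. Here's a sketch using the thresholds as given in the text (the World Bank revises these annually, so treat them as a snapshot):

```python
# World Bank income groups, using the per capita thresholds quoted above.
def income_group(gdp_per_capita):
    if gdp_per_capita <= 1_005:
        return "low income"
    elif gdp_per_capita <= 3_955:
        return "lower middle income"
    elif gdp_per_capita <= 12_235:
        return "upper middle income"
    return "high income"  # implied: everything above the upper-middle band

print(income_group(800))    # low income (illustrative per capita figure)
print(income_group(9_000))  # upper middle income (illustrative figure)
```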

The figure shows per capita income on the horizontal axis, with the groups of countries separated by income level. The corresponding poverty line for each country as determined by that country is plotted on the vertical axis. The horizontal line shows an average poverty line for the countries within that income group.

The underlying data for national poverty lines is from an article by Dean Jolliffe and Espen Beer Prydz, "Estimating international poverty lines from comparable national thresholds," which appeared in the Journal of Economic Inequality (2016, 14, pp. 185-198). An ungated version is available from the World Bank here.

Tuesday, November 7, 2017

Trade, Technology, and Job Disruption

Both technological developments and international trade can disrupt an economy, and in somewhat similar ways, but many people have very different reactions to these forces. To illustrate the point, I sometimes pose this question:

There's a US company which has developed a new technology that allows it to make a certain product more cheaply. This company hires some additional workers, but the other firms trying to make that same product don't have the technology, so they lay off workers or even go bankrupt. Should steps be taken to ban or limit the use of this new technology?

Pause for thought. The usual reaction that emerges from the discussion is that we can't hope to freeze technology in place. Ultimately, we don't want to be a society with lots of workers who light gas streetlamps, or who operate telegraphs or who plow fields with oxen. Sure, it's important to have social policies to cushion the transition to new industries, but overall, we need to be facilitating new technology rather than blocking it.

All of which is fair enough, but here's the kicker. Now you discover that the "new technology" from the US firm is that it is importing the product more cheaply from a foreign provider. The same disruption of the US labor force is occurring, but as a result of an expansion of international trade rather than as a result of technology. Personally, my response to the economic disruption of trade is essentially the same as my response to the economic disruption of technology: that is, I believe in assisting the transition for dislocated workers no matter the reason behind the dislocation. But for many people, their reaction to economic disruption is different depending on whether the underlying cause is technology or trade.

These arguments are renewed and refreshed in a couple of recent publications. J. Bradford DeLong has written "When Globalization Is Public Enemy Number One" in the most recent issue of the Milken Institute Review (Fourth Quarter 2017, 19:4, pp. 22-31). Also, the World Trade Report 2017 from the World Trade Organization is centered on the theme, "Trade, Technology, and Jobs."

As a starting point, here's a figure from DeLong's paper about the rise of globalization. The red line shows the sum of exports and imports compared to world GDP. The first explosion of globalization starting in the 19th century, and the more recent rise of globalization, are both readily apparent.
But of course, a rise in trade isn't the only economic change taking place. Brad points out that the fall in blue-collar and manufacturing jobs was well underway back in the 1950s and 1960s, well before globalization had restarted in force--because of changes in technology. Indeed, I've written before about "Automation and Job Loss: The Fears of 1964" (December 1, 2014). Brad readily admits that the shock of increased trade with China starting around 2001 was an important event, and of course the Great Recession had a powerful effect on jobs too. But overall, by his calculations, only a very minor part of the decline in blue-collar jobs since 1948 is about international trade: it's mostly about technological change, and to some extent about the rising strength of economies in other parts of the world and misjudgments of macroeconomic policy by the US government. He writes:
"To repeat, because it bears repeating: globalization in general and the rise of the Chinese export economy have cost some blue-collar jobs for Americans. But globalization has had only a minor impact on the long decline in the portion of the economy that makes use of high-paying blue-collar labor traditionally associated with men. ... Pascal Lamy, the former head of the World Trade Organization, likes to quote China’s sixth Buddhist patriarch: `When the wise man points at the moon, the fool looks at the finger.' Market capitalism, he says, is the moon. Globalization is the finger."
Given that comment from Lamy, it is perhaps unsurprising that the World Trade Report 2017 takes a position similar to DeLong's. There are roughly a jillion examples of how technology improves productivity but can also disrupt job markets. The report summarizes:
"By making some products or production processes obsolete, and by creating new products or expanding demand for products that are continuously innovated, technological change is necessarily associated with the reallocation of labour across and within sectors and firms. Such technology-induced reallocations affect workers differently, depending on their skills or on the tasks they perform. ICTs tend to be used more intensively and more productively by skilled workers than by unskilled workers. Automation tends to affect routine activities more than non-routine activities, because machines still do not perform as well as humans when it comes to dexterity or communication skills. ... [T]he labour market effects of technology are relatively more favourable to skilled workers and to workers performing tasks that are harder to automate."
What about the worry that technology will lead to a dramatic reduction in the total number of jobs? Obviously, this prediction is not an extrapolation from history: the US and world economy have been experiencing technological growth in a serious way for a couple of centuries, and there is no long-run downward trend in the total number of jobs. Why is that? The report offers these reasons (citations omitted):

"The view that the new technological advances in artificial intelligence and robotics will not lead to a `jobless future' is based on historical experience. Although each wave of technological change has generated technological anxiety and led to temporary disruptions with the disappearance of some tasks and jobs, other jobs have been modified, and new and often better jobs have eventually been developed and filled through three interrelated mechanisms.

"First, new technological innovations still require a workforce to produce and provide the goods, services and equipment necessary to implement the new technologies. Recent empirical evidence suggests that employment growth in the United States between 1980 and 2007 was significantly greater in occupations encompassing more new job titles. 
"Second, the new wave of technologies may enhance the competitiveness of firms adopting these technologies by increasing their productivity. These firms may experience a higher demand for the goods or services they produce, which could imply an increase in their labour demand. Several empirical studies ... find that the adoption of labour-saving technologies did not reduce the overall labour demand in European countries and other developed economies. 
"Finally, ... the upcoming technological advances may complement some tasks or occupations and therefore increase labour productivity, which could lead to either higher employment or higher wages, or both. The new workers and/or those benefitting from a pay rise may increase their consumption spending, which in turn tends to maintain or raise the demand for labour in the economy. Recent empirical evidence suggests that the use of industrial robots at the sector level has led to an increase in both labour productivity and wages for workers in Australia, 14 European countries, the Republic of Korea and the United States."
It's of course impossible to prove that future patterns will be similar. But the historical evidence suggests that finding ways to stimulate and work with technology is a better path to prosperity than trying to limit or block it.

In the discussion of trade and jobs, the report readily admits that trade (like technology) causes economic change and dislocation. After a substantial discussion of the empirical evidence, here are some conclusions from the report:
"First, evidence consistently shows that the welfare gains from trade are considerably larger than the costs. Effects on aggregate employment are minor and tend to be positive. The net effect on welfare depends on the magnitude of adjustment costs and trade gains. But existing evidence evaluates costs to be just a fraction of the gains.
"Second, the debate over the labour market effects of import competition needs to be qualified. While some manufacturing jobs may be lost in some local labour markets, other jobs may be created in other zones or in the services sector. When researchers take these effects into account their findings suggest a positive overall effect of trade on employment. Similar results are found when input-output linkages are taken into account or when the response of the labour supply to increased real wages is accounted for. Clearly, those who lose jobs because of import competition are not necessarily the same workers who get new jobs in exporting firms, because they are likely to have different skillsets or limited labour mobility. These adjustment costs need to be taken into account, but without losing sight of the overall picture. 
"Third, there is evidence that export opportunities are associated with employment growth. In developing countries, improved access to foreign markets has contributed to the movement of workers away from agriculture and towards services and manufacturing, as well as away from household businesses toward  firms in the enterprise sector, and away from state-owned firms toward private domestic and foreign-owned firms. Although more should be done to understand how labour markets in least-developed countries (LDCs) are affected by trade opening, there is evidence that the involvement of LDCs in GVCs [global value chains] has been a vehicle for developing employment opportunities.
"Fourth, trade offers opportunities for better-paid jobs. A significant share of jobs is related to trade, either through exports or imports, and both exporters and importers pay higher wages. This is because trading is a skills-intensive activity. International trade requires the services of skilled workers, who can ensure compliance with international standards, manage international marketing and distribution, and meet the demanding standards of customers from high-income countries; and trade leads to the selection of more productive firms and provides firms with an incentive to upgrade their technology. There is evidence that better access to foreign markets benefits exporting firms and thus their workers. This in turn positively affects regions where these firms are located, as well as occupations that are intensively used by these firms.
"As regards the evidence on the impact of trade on wage dispersion, there is evidence that by increasing the demand for skills, trade contributes to wage differences between high- and low-skilled workers. ... It is also worth noting that most of the existing analysis fails to account for the fact that most of the gains from trade opening come through a reduction in prices. Workers are also consumers. Trade impacts their well-being not only through changes in the wage received, but also through changes in the price of the goods that they consume. Given that most of the gains from trade opening through the consumption channel accrue to lower-income groups, failing to account for the income-group specific price changes overestimates the impact on wage disparity."

For some additional discussion of concerns that technology (or trade) would decimate the number of jobs, see my earlier post "Automation and Job Loss: The Fears of 1964" (December 1, 2014).



Friday, November 3, 2017

How Food Banks Use Markets

"Imagine that someone gave you 300 million pounds of food and asked you to distribute it to the poor—through food banks—all across the United States. The nonprofit Feeding America faces this problem every year. The food in question is donated to Feeding America by manufacturers and distributors across the United States. As an example, a Walmart in Georgia could have 25,000 pounds of excess tinned fruit at one of its warehouses and give it to Feeding America to distribute to one of 210 regional food banks. How should this be accomplished?"

Contemplate your answer for a moment. Canice Prendergast discusses how Feeding America used to tackle this problem, and how it switched to a market-oriented solution, in "How Food Banks Use Markets to Feed the Poor," which appears in the Journal of Economic Perspectives (Fall 2017, 31:4, pp. 145-62). 

One piece of context that is useful here is that the 210 regional food banks all have local donors, and they typically get a majority of food from those donors. The question here is how to allocate the additional food donations received at the national level. Here's how Prendergast describes the earlier system: 
"Until 2005, Feeding America had a method of allocating resources that is fairly common among not-for-profits: a “wait your turn” system, where it gave out food based on a food bank’s position in a queue. The queue was determined by the amount of food that a food bank had received compared to a measure of need called the “Goal Factor,” which is (roughly) the number of poor in a food bank’s area compared to the national average. The formula is more nuanced than a simple head count, as it distinguishes between usage rates for those below the poverty line, between 100 and 125 percent of the poverty line, and between 125 and 185 percent. When a food bank’s position in the queue was high enough, it would receive a call or email from Feeding America to say that it had been assigned a “load.” The load had to be collected from the donor, and food banks were (and remain) liable for transportation costs. The food bank had 4–6 hours to say “yes” or “no.” After a food bank was offered food, its position in the queue would be recalculated, as its measure of food received relative to need would change. If it turned down the offer, the load would go to the next food bank in the queue. This mechanism had been used since the late 1980s, and it allocated 200–220 million pounds of food each year from 2000 to 2004. Feeding America did not distinguish much between different kinds of food, so that each food bank on average got a similar product mix from them (though randomly a food bank could get lucky or unlucky in whether it would get food that was popular among participants)."
The rationale for a system like this one is pretty clear, and so were the practical difficulties. For example, it was quite possible for a food bank in Idaho to be offered a large donation of potatoes when it already had lots in stock. It could take a few days to work out who might get a certain donation of, say, fresh produce--in which time it could spoil. Some food banks have lots of local donors, while others do not, but the Goal Factor approach--based on the number of poor people in the area--doesn't take this into account. And so on.

Feeding America put together a committee to consider alternatives. "The group consisted of eight food bank directors, three staff from Feeding America, and four University of Chicago faculty." Prendergast gives a sense of the tone of the early interactions, when the Chicago faculty started talking about market approaches, in this way:
John Arnold, a member of the redesign group who was for many years Director of the Feeding America Western Michigan Food Bank, said to me once near the start of the process: “I am a socialist. That’s why I run a food bank. I don’t believe in markets. I’m not saying I won’t listen, but I am against this.’’
This situation is clearly not one in which a pure cash-based market is going to serve the desired function. But the committee came up with a market-related approach, which it called the Choice System. An internal currency called "shares" was created and distributed to food banks using the same Goal Factor criterion.

Now what happens is that Feeding America holds an internal sealed-bid auction twice a day, Monday through Friday, at 12 and 4 o'clock. On a typical day, there can be 50 truckloads of food donated, 25,000 pounds apiece. The loads are posted online at least two hours before the auction.

The practical advantages of this approach are manifold; here are a few of them, with a stylized sketch of the bidding mechanics after the list:

  • Food banks can bid on the specific things they need, rather than being offered stuff they don't need. For example, food banks often put a higher value on dry goods that will last well, like cereal or pasta, or on supplies like disposable plates and tableware. 
  • Food banks have some ability to borrow "shares," or for several smaller food banks to bid jointly. 
  • If a food bank has extra local donations, it can offer them to the Choice System and receive additional shares. 
  • Food banks can bid more on donations that are geographically close to them (remember, the food bank is responsible for transportation costs). 
  • If a food bank doesn't need anything that is being donated right now, it doesn't bid, and carries its shares over to the next auction. 
  • A food bank in an area with a low level of local donations can focus its bidding on loads that have a high nutritional value and calorie count, but aren't as attractive to other food banks. 
  • In a few cases, no food bank really wants a certain donation, but it's important not to upset potential donors, so food banks can bid negative shares--that is, they can receive shares in exchange for picking up that particular load. 
  • Each night, the "shares" that were spent that day are reallocated among all the food banks, using a formula related to the "Goal Factor." 
  • However, if a food bank with lots of local donations--and thus no need to bid--accumulates a certain number of shares, then it doesn't receive any additional shares above that level, on the basis that it clearly doesn't need them. 
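To see how these pieces fit together, here is a stylized sketch of one round of the Choice System: a sealed-bid auction settled in shares (including a negative bid), followed by the nightly redistribution of the shares spent. As with the earlier sketch, the names, bids, and balances are hypothetical illustrations of the mechanism as described, not Feeding America's actual implementation:

```python
# One stylized round of the Choice System. All names, bids, and
# balances are hypothetical.

def run_auction(bids, balances):
    """Award each load to the highest sealed bid and settle in shares.

    `bids` maps a load description to {bank: bid}. Bids may be
    negative: a bank that agrees to haul away an unwanted load is
    effectively paid shares for doing so.
    """
    spent = 0.0
    for load, load_bids in bids.items():
        if not load_bids:
            continue  # no bidder; unspent shares simply carry over
        winner = max(load_bids, key=load_bids.get)
        price = load_bids[winner]
        balances[winner] -= price  # a negative price *adds* shares
        spent += price
        print(f"{load} -> {winner} at {price} shares")
    return spent

def redistribute(spent, balances, goal_factors, cap=None):
    """Each night, recycle the shares spent that day back to the banks
    in proportion to need (the Goal Factor), keeping the total supply
    of shares fixed. Banks already holding more than `cap` shares
    receive nothing extra."""
    eligible = {b: g for b, g in goal_factors.items()
                if cap is None or balances[b] < cap}
    total = sum(eligible.values())
    for bank, g in eligible.items():
        balances[bank] += spent * g / total

balances = {"Idaho": 10_000.0, "Georgia": 6_000.0, "Boston": 4_000.0}
bids = {"pasta, 25,000 lb": {"Georgia": 1_500, "Boston": 2_200},
        "near-expiry produce, 25,000 lb": {"Idaho": -300}}  # paid to take it
spent = run_auction(bids, balances)  # Boston wins pasta; Idaho gains 300
redistribute(spent, balances, {"Idaho": 0.8, "Georgia": 1.4, "Boston": 0.9},
             cap=20_000)
```

The conservation property is the key design choice: shares spent today flow back into the system tonight, so a bank that holds off bidding is, in effect, saving up purchasing power rather than losing its turn.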

Prendergast's paper describes in more detail, with specific empirical evidence, how the system has worked in practice. But for many economist-readers, the key theme here will be the interaction of local information, incentives, and bidding, working together as a mechanism for efficient allocation. It would be hard to find a central authority with better altruistic intentions than Feeding America. But when it comes to allocating scarce resources, the decentralized market-like mechanism performs considerably better. 

Thursday, November 2, 2017

Boston, the 2024 Olympics, and the Power of Economics

On January 8, 2015, the US Olympic Committee chose the city of Boston from among four finalists to be the US city that would compete for the right to host the 2024 Summer Olympic Games. By July, the USOC had retracted the invitation. What happened? Andrew Zimbalist, who had a ringside seat for the controversy from his position at Smith College as well as a professional interest as a researcher in sports economics, tells the story in "Boston Takes a Pass on the Olympic Torch: Scholarly research does sometimes have a positive effect on public policy," which appears in the Fall 2017 issue of Regulation magazine (pp. 28-33).

Part of the issue was a lack of transparency so complete that it blended into outright disinformation. For example, a group called Boston 2024 had submitted Boston's proposal to the USOC, but the proposal was not publicly released. The mayor of Boston, Marty Walsh, without a vote of the city council or a public debate, signed a "joinder agreement" that committed the city to accept all terms of the US Olympic Committee and the International Olympic Committee if the city was chosen.

As the details came out, they weren't pretty. As Zimbalist reports:
"One such term [of the joinder agreement] was that the city would provide a financial guarantee to cover any deficits in the event of a cost overrun or revenue shortfall. ... The 2012 Games in London alone had a nearly threefold overrun, with a final cost in excess of $18 billion. Given that background and the fact that the entire Boston city budget was only $2.7 billion, it was not a trivial matter that Walsh had signed this agreement."
Other elements of the plan turned out to include a gag rule: "The City, including its employees, officers, and representatives, shall not make, publish, or communicate to any Person, or communicate in any public forum, any comments or statements (written or oral) that reflect unfavorably upon, denigrate or disparage, or are detrimental to the reputation or stature of, the IOC, the IPC, the USOC, the IOC Bid, the Bid Committee, or the Olympic or Paralympic movement. ..."

Other requirements turned out to involve tax breaks, shutting down the Boston Common, and more:
"Another IOC requirement was that the city clear all its public billboards so they would be available for IOC marks as well as those of IOC sponsors. Still another requirement was that all activities connected to constructing the Olympic venues and infrastructure, the sale of tickets, and income to the athletes would be tax-exempt. ... 
"The initial plan called for constructing the beach volleyball venue in the middle of Boston Common. While this might have produced nice images for international television, it was viewed as heresy in Boston. The Common is enjoyed by thousands of Bostonians every day for strolls and recreation. To make room for the beach volleyball facility, dozens of trees would have to be felled and months of pre- and post-Games disruption would render the Common unusable. The bid also called for $5.2 billion in public transportation infrastructure investment. Bid supporters claimed that those investments were already planned and funded. It turned out, however, that they were little more than unapproved and unfunded conceptual designs. Further, Bill Straus, the co-chair of the state legislature’s transportation committee, said on local television that the actual costs of the projects would exceed $13 billion. ...  The bid identified the Columbia Point area of southeast Boston as the future home of the Olympic Village. The Widett Circle area, south of South Station, would be the location of the Olympic Stadium. Among other problems with these sites, the bid claimed that the existing property owners had been contacted and were on board with the repurposing of their land. Upon learning of the bid’s intentions, the affected landowners stated that they knew nothing about the plans. ...  Further, the bid identified no developers who were interested in building the proposed venues, nor found any community ready to host either the Velodrome or the Aquatic Center, and counted on Harvard and MIT to host various competitions while the schools disavowed any interest in doing so."
But maybe these issues could have been negotiated or worked out, if the financial picture for a city hosting the Summer Games had not been so generally grim. As Zimbalist reports:
"[T]he typical host of the Summer Games experiences costs on the order of $15–20 billion, yet receives only $3–5 billion in revenue—not a very salubrious financial balance. The IOC propaganda machine will claim that any short-term financial losses will be offset by long-term gains. Most notably, the host city will be put on the world map, occasioning growth in tourism, trade, and foreign investment. Those are nice thoughts, but there is little evidence from academic research that they ever materialize.
"First, most Olympic host cities are already on the world map. People and businesses that have the resources and interest to travel internationally already know about the city and its allurements. Second, Olympic hosts often experience a decrease in tourism during the Games as travelers stay away from the congestion, inconvenience, high prices, and security issues. Hotel occupancy may drop even more because most cities expand lodging capacity appreciably in anticipation of an elusive tourism bonanza. Third, the tourists who do attend the Games return home and tell their friends, neighbors, and relatives about the exciting 100-yard dash or swimming relay they watched; they rarely tell stories about the cultural or culinary attractions of the host city. Thus, tourism loses its most effective propagator: word of mouth. Fourth, exposure on the world stage does not necessarily burnish a city’s image; instead, it may tarnish it—just ask Mexico 1968, Munich 1972, Montreal 1976, Athens 2004, Sochi 2014, and Rio 2016.
"The long-term effects, in fact, may well be negative. After spending billions of dollars on Olympic-related construction, the host city then faces the challenge of what to do with the venues after the Olympics leave town."
As the cost and revenue estimates of the Boston 2024 organizing effort came out, it seemed clear that the costs were systematically underestimated and revenues systematically overestimated.

The Olympic Games are often viewed as an honor, and thus cities (and countries) have lined up to participate. But you don't pay the bills with honor, and the research of Zimbalist and others has documented that large shares of the cost of the Games have usually fallen to taxpayers. For more discussion, see "The Economics of Hosting the Olympics" (May 13, 2016).

One can imagine an alternative model of the Summer Olympics, in which the competition between cities was over how to hold the Games at the lowest cost, using existing facilities as much as possible. Maybe (and I know this is crazy talk) the focus could even shift from promotionalism to the actual athletes and events. The 2024 Games will be held in Paris, and an alternative US city, Los Angeles, is on the docket for the 2028 Games. It will be interesting to see if they can negotiate the process in a way that holds down the typical losses. 

Wednesday, November 1, 2017

Fall 2017 Journal of Economic Perspectives On-line

I was hired back in 1986 to be the Managing Editor for a new academic economics journal, at the time unnamed, but which soon was launched as the Journal of Economic Perspectives. The JEP is published by the American Economic Association, which back in 2011 decided--to my delight--that it would be freely available on-line, from the current issue back to the first issue. Here, I'll start with the Table of Contents for the just-released Fall 2017 issue, which in the Taylor household is known as issue #122. Below that are abstracts and direct links for all of the papers. I will blog more specifically about some of the papers in the next week or two, as well.
___________________

Symposium: Health Insurance and Choice

"Delivering Public Health Insurance through Private Plan Choice in the United States," by Jonathan Gruber
The United States has seen a sea change in the way that publicly financed health insurance coverage is provided to low-income, elderly, and disabled enrollees. When programs such as Medicare and Medicaid were introduced in the 1960s, the government directly reimbursed medical providers for the care that they provided, through a classic "single payer system." Since the mid-1980s, however, there has been an evolution towards a model where the government subsidizes enrollees who choose among privately provided insurance options. In the United States, privatized delivery of public health insurance appears to be here to stay, with debates now focused on how much to expand its reach. Yet such privatized delivery raises a variety of thorny issues. Will choice among private insurance options lead to adverse selection and market failures in privatized insurance markets? Can individuals choose appropriately over a wide range of expensive and confusing plan options? Will a privatized approach deliver the promised increases in delivery efficiency claimed by advocates? What policy mechanisms have been used, or might be used, to address these issues? A growing literature in health economics has begun to make headway on these questions. In this essay, I discuss that literature and the lessons for both economics more generally and health care policymakers more specifically.
Full-Text Access | Supplementary Materials


"Selection in Health Insurance Markets and Its Policy Remedies," by Michael Geruso and Timothy J. Layton
Selection (adverse or advantageous) is the central problem that inhibits the smooth, efficient functioning of competitive health insurance markets. Even—and perhaps especially—when consumers are well-informed decision makers and insurance markets are highly competitive and offer choice, such markets may function inefficiently due to risk selection. Selection can cause markets to unravel with skyrocketing premiums and can cause consumers to be under- or overinsured. In its simplest form, adverse selection arises due to the tendency of those who expect to incur high health care costs in the future to be the most motivated purchasers. The costlier enrollees are more likely to become insured rather than to remain uninsured, and conditional on having health insurance, the costlier enrollees sort themselves to the more generous plans in the choice set. These dual problems represent the primary concerns for policymakers designing regulations for health insurance markets. In this essay, we review the theory and evidence concerning selection in competitive health insurance markets and discuss the common policy tools used to address the problems it creates. We emphasize the two markets that seem especially likely to be targets of reform in the short and medium term: Medicare Advantage (the private plan option available under Medicare) and the state-level individual insurance markets.
Full-Text Access | Supplementary Materials

"The Questionable Value of Having a Choice of Levels of Health Insurance Coverage," by Keith Marzilli Ericson and Justin Sydnor
In most health insurance markets in the United States, consumers have substantial choice about their health insurance plan. However additional choice is not an unmixed blessing as it creates challenges related to both consumer confusion and adverse selection. There is mounting evidence that many people have difficulty understanding the value of insurance coverage, like evaluating the relative benefits of lower premiums versus lower deductibles. Also, in most US health insurance markets, people cannot be charged different prices for insurance based on their individual level of health risk. This creates the potential for well-known problems of adverse selection because people will often base the level of health insurance coverage they choose partly on their health status. In this essay, we examine how the forces of consumer confusion and adverse selection interact with each other and with market institutions to affect how valuable it is to have multiple levels of health insurance coverage available in the market.
Full-Text Access | Supplementary Materials

Symposium: From Experiments to Economic Policy

"From Proof of Concept to Scalable Policies: Challenges and Solutions, with an Application," by Abhijit Banerjee, Rukmini Banerji, James Berry, Esther Duflo, Harini Kannan, Shobhini Mukerji, Marc Shotland and Michael Walton
The promise of randomized controlled trials is that evidence gathered through the evaluation of a specific program helps us—possibly after several rounds of fine-tuning and multiple replications in different contexts—to inform policy. However, critics have pointed out that a potential constraint in this agenda is that results from small "proof-of-concept" studies run by nongovernment organizations may not apply to policies that can be implemented by governments on a large scale. After discussing the potential issues, this paper describes the journey from the original concept to the design and evaluation of scalable policy. We do so by evaluating a series of strategies that aim to integrate the nongovernment organization Pratham's "Teaching at the Right Level" methodology into elementary schools in India. The methodology consists of reorganizing instruction based on children's actual learning levels, rather than on a prescribed syllabus, and has previously been shown to be very effective when properly implemented. We present evidence from randomized controlled trials involving some designs that failed to produce impacts within the regular schooling system but still helped shape subsequent versions of the program. As a result of this process, two versions of the programs were developed that successfully raised children's learning levels using scalable models in government schools. We use this example to draw general lessons about using randomized control trials to design scalable policies.
Full-Text Access | Supplementary Materials


"Experimentation at Scale," by Karthik Muralidharan and Paul Niehaus
This paper makes the case for greater use of randomized experiments "at scale." We review various critiques of experimental program evaluation in developing countries, and discuss how experimenting at scale along three specific dimensions—the size of the sampling frame, the number of units treated, and the size of the unit of randomization—can help alleviate the concerns raised. We find that program-evaluation randomized controlled trials published over the last 15 years have typically been "small" in these senses, but also identify a number of examples—including from our own work—demonstrating that experimentation at much larger scales is both feasible and valuable.
Full-Text Access | Supplementary Materials

"Scaling for Economists: Lessons from the Non-Adherence Problem in the Medical Literature," by Omar Al-Ubaydli, John A. List, Danielle LoRe and Dana Suskind
Economists often conduct experiments that demonstrate the benefits to individuals of modifying their behavior, such as using a new production process at work or investing in energy saving technologies. A common occurrence is for the success of the intervention in these small-scale studies to diminish substantially when applied at a larger scale, severely undermining the optimism advertised in the original research studies. One key contributor to the lack of general success is that the change that has been demonstrated to be beneficial is not adopted to the extent that would be optimal. This problem is isomorphic to the problem of patient non-adherence to medications that are known to be effective. The large medical literature on countermeasures furnishes economists with potential remedies to this manifestation of the scaling problem.
Full-Text Access | Supplementary Materials

Articles

"How Food Banks Use Markets to Feed the Poor," by Canice Prendergast
A difficult issue for organizations is how to assign valuable resources across competing opportunities. This work describes how Feeding America allocates about 300 million pounds of food a year to over two hundred food banks across the United States. It does so in an unusual way: in 2005, it switched from a centralized queuing system, where food banks would wait their turn, to a market-based mechanism where they bid daily on truckloads of food using a "fake" currency called shares. The change and its impact are described here, showing how the market system allowed food banks to sort based on their preferences.
Full-Text Access | Supplementary Materials

"Brexit: The Economics of International Disintegration," by Thomas Sampson
On June 23, 2016, the United Kingdom held a referendum on its membership in the European Union. Although most of Britain's establishment backed remaining in the European Union, 52 percent of voters disagreed and handed a surprise victory to the "leave" campaign. Brexit, as the act of Britain exiting the EU has become known, is likely to occur in early 2019. This article discusses the economic consequences of Brexit and the lessons of Brexit for the future of European and global integration. I start by describing the options for post-Brexit relations between the United Kingdom and the European Union and then review studies of the likely economic effects of Brexit. The main conclusion of this literature is that Brexit will make the United Kingdom poorer than it would otherwise have been because it will lead to new barriers to trade and migration between the UK and the European Union. There is considerable uncertainty over how large the costs of Brexit will be, with plausible estimates ranging between 1 and 10 percent of UK per capita income. The costs will be lower if Britain stays in the European Single Market following Brexit. Empirical estimates that incorporate the effects of trade barriers on foreign direct investment and productivity find costs 2–3 times larger than estimates obtained from quantitative trade models that hold technologies fixed.
Full-Text Access | Supplementary Materials


"Enrollment without Learning: Teacher Effort, Knowledge, and Skill in Primary Schools in Africa," by Tessa Bold, Deon Filmer, Gayle Martin, Ezequiel Molina, Brian Stacy, Christophe Rockmore, Jakob Svensson and Waly Wane
School enrollment has universally increased over the last 25 years in low-income countries. Enrolling in school, however, does not assure that children learn. A large share of children in low-income countries complete their primary education lacking even basic reading, writing, and arithmetic skills. Teacher quality is a key determinant of student learning, but not much is known about teacher quality in low-income countries. This paper discusses an ongoing research program intended to help fill this void. We use data collected through direct observations, unannounced visits, and tests from primary schools in seven sub-Saharan African countries to answer three questions: How much do teachers teach? What do teachers know? How well do teachers teach?
Full-Text Access | Supplementary Materials


"Population Control Policies and Fertility Convergence," by Tiloka de Silva and Silvana Tenreyro
Rapid population growth in developing countries in the middle of the 20th century led to fears of a population explosion and motivated the inception of what effectively became a global population-control program. The initiative, propelled in its beginnings by intellectual elites in the United States, Sweden, and some developing countries, mobilized resources to enact policies aimed at reducing fertility by widening contraception provision and changing family-size norms. In the following five decades, fertility rates fell dramatically, with a majority of countries converging to a fertility rate just above two children per woman, despite large cross-country differences in economic variables such as GDP per capita, education levels, urbanization, and female labor force participation. The fast decline in fertility rates in developing economies stands in sharp contrast with the gradual decline experienced earlier by more mature economies. In this paper, we argue that population-control policies likely played a central role in the global decline in fertility rates in recent decades and can explain some patterns of that fertility decline that are not well accounted for by other socioeconomic factors.
Full-Text Access | Supplementary Materials


"Recommendations for Further Reading," by Timothy Taylor
Full-Text Access | Supplementary Materials