Thursday, February 28, 2013

Can Africa's Energy Growth Be Green?

At least as measured by emissions of carbon dioxide, "Africa is the green continent," as Paul Collier and Anthony Venables note in the most recent issue of the World Economic Review. Of course, the reason is that the standard of living across Africa is so low that not much energy is being consumed. As the economies of Africa develop, can their energy demand be green?

The common relationship between economic growth and environmental pollution is sometimes called the "environmental Kuznets curve." It's an inverted-U; that is, economic development first brings a rise in pollution, but then later leads to a reduction in pollution. Much of the underlying reason involves political tradeoffs: the very poor are more willing to sacrifice environmental protection for gains in consumption, while those who are better off become less willing to do so. For a review of these arguments in my own Journal of Economic Perspectives from back in 2002, see "Confronting the Environmental Kuznets Curve" by Susmita Dasgupta, Benoit Laplante, Hua Wang and David Wheeler. (Like all articles in JEP back to the first issue in 1987, it is freely available on-line courtesy of the American Economic Association.) Here's a figure from Collier and Venables, showing production of carbon dioxide relative to economic output as measured by GDP.


The hope that Africa might be able to minimize or even sidestep the rise in pollution that often comes with economic development is rooted in several underlying facts. Africa has strong natural potential for use of some renewable energy resources, like solar power. In addition, Africa has what economists have long referred to as "the advantages of backwardness" (the phrase comes from the writings of Alexander Gerschenkron back in 1962, available at various places on the web like here). The notion is that countries which start out behind may be able to catch up rapidly because they can draw on technologies already developed elsewhere. In some cases, they may even be able to leapfrog certain stages of technology; for example, many areas of Africa may move directly to mobile phones rather than land lines for all, and then to retail banking based on those phones, rather than following the historical path of phones and banking in high-income countries.

Could Africa also use modern technologies for energy conservation and alternative sources of energy to sidestep the environmental Kuznets curve? Collier and Venables pose this question in "How Rapidly Should Africa Go Green? The Tension Between Natural Abundance and Economic Scarcity." The essay is a nice readable version of a more technical research paper that they published last year in Energy Economics--"Greening Africa? Technologies, Endowments and the latecomer effect"--which is available as a working paper here.  Their conclusion is not optimistic: 

"Superficially, Africa appears well-suited for green energy. Sunshine, water, land, forests, and being a latecomer all confer significant advantages. However, energy generation, energy saving, and carbon capture are intensive in capital, governance capacity and skills. Unfortunately, all of these factors are scarce in Africa. These factor scarcities offset the advantages conferred by natural endowments and are often decisive. Similarly, the historic advantage of being a latecomer to the installation of generating capacity is offset by the historic disadvantage of the acute energy scarcity inherited from past under-investment: Africa cannot afford to wait for further developments in green technologies. Nevertheless, there is scope for Africa’s natural advantages for green energy to be harnessed to a global advantage. But to do so will require international action that brings global factor endowments to bear on Africa’s natural opportunities."

What sort of international action would be especially useful? They emphasize three possibilities: 1) "It is cheaper for the international community to pay for the installation of green technology in Africa’s new plants than to retrofit it in existing Northern plants;" 2) "A second Africa-specific opportunity in generation is for international public finance, perhaps through guarantees, to subsidize the cost of switching from gas flaring to either LNG or gas-fired electricity generation;" 3) "A third would be to provide international public subsidies or guarantees for hydropower mega-projects."

For an overview of the scale of this issue, a useful starting point is a 2011 World Bank report by Anton Eberhard, Orvika Rosnes, Maria Shkaratan, and Haakon Vennemo called "Africa’s Power Infrastructure: Investment, Integration, Efficiency." The report has all sorts of useful detail on the potential for different kinds of power generation, but here's the big-picture overview of where sub-Saharan Africa stands on power generation and what is needed (with citations and references to figures omitted).

"The combined power generation capacity of the 48 countries of Sub-Saharan Africa is 68 gigawatts (GW)—no more than that of Spain. Excluding South Africa, the total falls to 28 GW, equivalent to the installed capacity of Argentina (data for 2005 ). Moreover, as much as 25 percent of installed capacity is not operational for various reasons, including aging plants and lack of maintenance. The installed capacity per capita in Sub-Saharan Africa (excluding South Africa) is a little more than one-third of South Asia’s (the tworegions were equal in 1980) and about one-tenth of that of Latin America. Capacity growth has been largely stagnant during the past three decades ...

"We assume that over a 10-year period the continent should be expected to redress its infrastructure backlog, keep pace with the demands of economic growth, and attain a number of key social targets for broader infrastructure access....  Installed capacity will need to grow by more than 10 percent annually—or more than 7,000 megawatts (MW) a year—just to meet Africa’s suppressed demand, keep pace with projected economic growth, and provide additional capacity to support efforts to expand electrification. ... Based on these assumptions, the overall costs for the power sector between 2005 and 2015 in Sub-Saharan Africa are a staggering $41 billion a year—$27 billion for investment and $14 billion for operations and maintenance."
The task of increasing energy production in Africa is enormous: roughly speaking, the World Bank estimates mean a doubling of annual infrastructure spending. The potential economic gains of improved power infrastructure to countries in Africa, and thus to hundreds of millions of the poorest people in the world, are also enormous: the World Bank economists cite estimates that economic growth might increase by 2-3 percentage points per year. But the environmental consequences of this increase could also be substantial, and so the policies that seek to promote growth of energy production in Africa also need to be designed to make it green. An environmental Kuznets curve is likely to arise--but with effort, its peak can be flattened.
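As a quick check on the report's arithmetic, here is a minimal sketch in Python; the figures are the World Bank's own, while the variable names and layout are mine:

    # Back-of-the-envelope check of the World Bank power figures quoted above
    installed_gw = 68.0          # Sub-Saharan Africa installed capacity, 2005 (GW)
    added_mw_per_year = 7_000.0  # required annual additions (MW)

    growth_rate = added_mw_per_year / (installed_gw * 1_000)
    print(f"Implied annual capacity growth: {growth_rate:.1%}")  # ~10.3%

    annual_cost = 27 + 14  # investment + operations/maintenance, $ billions
    print(f"Annual power-sector cost: ${annual_cost} billion")   # $41 billion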

Wednesday, February 27, 2013

Minimum Wage and the Law of Many Margins

Last November, I pointed out that President Obama had campaigned in 2008 on a pledge to raise the minimum wage, but that this proposal had vanished during the rest of his first term. Now, after the election, Obama somewhat unexpectedly resurrected the proposal in his State of the Union address. For a review of the controversy over the economics of the minimum wage, a useful starting point is "Why Does the Minimum Wage Have No Discernible Effect on Employment?" written by John Schmitt for the Center for Economic and Policy Research.

While Schmitt's title suggests, albeit in the form of a question, that it is an agreed-upon truth that the minimum wage has "no discernible effect on employment," I would say that his own review of the evidence suggests that there is still a genuine controversy between those who see the employment effects of the minimum wage as nil and those who see them as small. As Schmitt writes in the conclusion: "[W]hat is striking about the preceding review of possible channels of adjustment – including employment – is how often the weight of the empirical evidence is either inconclusive (statistically insignificant or positive in some cases and negative in others) or suggestive of only small economic effects."

There is a difficult problem of inferring causality here. Compared to the overall costs of firms, or even compared to the costs of low-wage labor, the effects of a slightly higher minimum wage are going to be hard to distinguish from everything else that's happening in the economy. The employment prospects for low-skilled workers have been falling for decades, and it would clearly be incorrect to blame that on the minimum wage. Rises in the minimum wage are more likely to occur when the economy is doing well and adding jobs, but it would clearly be incorrect to infer from this correlation that a higher minimum wage causes an increase in jobs. In addition, there are difficult questions of what is sometimes called "publication bias" in the minimum wage literature, in which researchers of different political bents may--surprise, surprise--tend to publish the results that confirm their pre-existing beliefs.

Rather than try to unpick this empirical puzzle here--for those who are interested, Schmitt provides a nice overview of the key papers and their methods--I'd like to focus on a separate issue, which I call the Law of Many Margins. The "law" simply points out that when a rule is imposed, like a minimum wage, there is almost always a wide variety of possible reactions to that rule. Schmitt provides a list of 11 possible reactions (!) to a higher minimum wage. They are:

  1. Reduction in hours worked (because firms faced with a higher minimum wage trim back on the hours they want)
  2. Reduction in non-wage benefits (to offset the higher costs of the minimum wage)
  3. Reduction in money spent on training (again, to offset the higher costs of the minimum wage)
  4. Change in composition of the workforce (that is, hiring additional workers with middle or higher skill levels, and fewer of those minimum wage workers with lower skill levels)
  5. Higher prices (passing the cost of the higher minimum wage on to consumers)
  6. Improvements in efficient use of labor (in a model where employers are not always at the peak level of efficiency, a higher cost of labor might give them a push to be more efficient)
  7. "Efficiency wage" responses from workers (when workers are paid more, they have a greater incentive to keep their jobs, and thus may work harder and shirk less)
  8. Wage compression (minimum wage workers get more, but those above them on the wage scale may not get as much as they otherwise would)
  9. Reduction in profits (higher costs of minimum wage workers reduce profits)
  10. Increase in demand (a higher minimum wage boosts buying power in overall economy)
  11. Reduced turnover (a higher minimum wage makes a stronger bond between employer and workers, and gives employers more reason to train and hold on to workers)

The evidence on many of these points is ambiguous at best, and indeed may vary across industries or geographic areas or employers. But it's worth noting that which of these effects arise, and with what magnitude, can only be settled by empirical evidence, not theoretical assertions.

I confess that I find it hard to get too excited about modest increases in the federal minimum wage every few years, which has been happening for decades. As Schmitt points out, the evidence is that this pattern of minimum wage increases has had at most a small effect on employment and other outcomes. But the minimum wage was $5.15/hour in 2007, when President Bush signed legislation to raise it to $7.25/hour by 2009. Given an unemployment rate that has been stuck near or above 8% for four solid years now, my preference would be to de-emphasize rises in the minimum wage for a while longer--and instead focus on other methods to help the working poor.

Tuesday, February 26, 2013

Clean Water: Next Steps?

The Clean Water Act of 1972 regarded water pollution as something that came out of a pipe--typically from either an industrial facility or a sewage treatment plant--and passed into streams, rivers, lakes, or the ocean. Thus, the legislation was based on a process of issuing permits for what could come out of these pipes, and on phasing back those pollutants. But the success of the Clean Water Act in reducing these "point-source" discharges means that the primary source of U.S. water pollution is now "nonpoint" pollution--that is, runoff from agricultural and urban areas.

Karen Fisher-Vanden and Sheila Olmstead set the stage for one way in which environmental regulators are trying to tackle the issue of nonpoint source pollution in their article, "Moving Pollution Trading from Air to Water: Potential, Problems, and Prognosis," which appears in the most recent (Winter 2013) issue of my own Journal of Economic Perspectives. Like all articles in JEP back to the first issue in 1987, it is freely available on-line courtesy of the American Economic Association. Fisher-Vanden and Olmstead write (citations and footnotes omitted):


 "The stated goals of the Clean Water Act were: 1) the attainment of fishable and swimmable waters by July 1, 1983; and 2) the elimination of all discharges of pollutants into navigable waters by 1985. Obviously, those deadlines have been postponed through amendments, and distinctions have since been made between different types of pollutants. ...  The Clean Water Act’s main tool is a set of effluent standards, implemented through point-source permitting. The National Pollutant Discharge Elimination System (NPDES) specifies quantitative effluent limits by pollutant, for each point source, based on available control technologies. For the most part, industrial point source compliance with these permits has been high. Municipal sewage treatment has also expanded dramatically, resulting in impressive improvements in urban water quality—for examples, see Boston Harbor and the Hudson River near New York City.

"But the gains from point source controls are reaching their limits. Even if all point sources were to achieve zero discharge, only 10 percent of US river and stream miles would rise one step or more on EPA’s water quality ladder. Nonpoint source pollution such as agricultural and urban runoff, atmospheric deposition, and runoff from forests and mines has become the major concern of water pollution abatement efforts. In fact, nonpoint source pollution from agricultural activities is now the primary source of impairment in US rivers and streams. Nonpoint source pollution involving nutrients like nitrogen and phosphorus causes excessive aquatic vegetation and algae growth and eventual decomposition, which deprives deeper waters of oxygen, creating hypoxic or “dead” zones, fish kills, and other damages. This problem is geographically widespread; seasonal dead zones in US coastal waters affect Puget Sound, the Gulf of Mexico, the Chesapeake Bay, and Long Island Sound. However, agricultural nonpoint source pollution is essentially unregulated by the Clean Water Act ..."


Before discussing what efforts are being made to address nonpoint source pollution, it's worth noting that there are legitimate questions about whether the costs of the Clean Water Act have exceeded the benefits. For example, in the Winter 2002 issue of my own Journal of Economic Perspectives, A. Myrick Freeman III reviewed studies bearing on "Environmental Policy Since Earth Day I: What Have We Gained?" Even if one goes beyond just looking at immediate economic gains and takes into account survey evidence on people's willingness to pay for knowing that water is cleaner (so-called "contingent valuation" evidence), the overall costs seem to far outstrip the benefits.

The intuition behind this result is that there were some prominent bodies of water that were highly contaminated, and that have improved substantially since the passage of the law. But the law was not just applied to a few high-profile cases of water pollution: it imposed costs everywhere. Moreover, as noted above, the law called for (eventually) the total elimination of all discharges, and even a passing acquaintance with the law of diminishing returns suggests that reducing pollution by one-third or one-half or more might be done at fairly low cost, but when it comes to figuring out how to reduce that last bit of pollution, the marginal costs may climb very high.

But the question of past costs and benefits of the Clean Water Act is, well, water under the bridge. At present, the situation is that the law has been so effective at reducing point-source emissions that the main source of water pollution is nonpoint sources, and especially runoff from agriculture. There are a variety of voluntary programs to encourage reducing nonpoint source pollution. But such programs generally lack teeth. Thus, environmental regulators have been trying in some areas to tackle the problem through a back door--by creating a structure for tradeable emissions permits. As Fisher-Vanden and Olmstead describe it, the current clean water law

"...  requires states to establish a Total Maximum Daily Load (TMDL)—basically a “pollution budget”—for each water body that does not meet ambient water quality standards for its designated use, despite point source controls. Designated uses include recreational use, public water supply, and industrial water supply, and each designated use has an applicable water quality standard. State courts began ordering the developmentof TMDLs in the 1980s and 1990s in response to lawsuits by environmental groups.Since 1996, the states in cooperation with the Environmental Protection Agency have completed thousands of TMDLs. Establishing a TMDL is a “holistic accounting exercise” in which all permitted sources and land uses within a watershed drainage area, including agriculture and urban runoff, are inventoried and allocated responsibility for portions of the pollution budget. While regulators cannot implement enforceable caps on agricultural pollution through this process, they have recognized the importance of incorporating agricultural abatement into clean-up processes, and water quality trading is one tool they have employed for this purpose."

They discuss a number of examples. In one fairly straightforward program here in Minnesota, the "Southern Minnesota Beet Sugar Cooperative, a beet processor, pays its 256 grower-members to invest in phosphorus-reducing land management changes so that the processor can meet its permit requirements for expanded production. In this case, the beet growers and the processing facility are treated under the processor’s permit as a single source to meet an overarching phosphorus effluent cap." A more complicated case involves the Chesapeake Bay, which receives discharges from six states and the District of Columbia, and in which three of the states are allowing for trading of water quality permits. Fisher-Vanden and Olmstead discuss several dozen of these programs around the country.

The practical and political advantages of using marketable permits are well-known among economists, and are a staple of most intro econ classes: specifically, those who need to reduce emissions can think about whether to do it themselves, or whether to pay some other economic actor--like a farm--for reducing emissions. With this choice, emissions get reduced--which, after all, is the goal--at the lowest possible cost. But the practical problems of implementing such a scheme over a large area like the Chesapeake Bay, making sure that reductions in nonpoint source pollution really happen, and in a way that doesn't clean up one area of the Bay at the expense of another area, can be quite complex. My own sense is that the Total Maximum Daily Load concept is a very useful one for thinking about the causes of water pollution, but it's time to stop putting all the requirements on the point-source emitters of water pollution. For bodies of water that are not meeting ambient quality standards, there should be requirements for both point and nonpoint emitters to reduce their water pollution--with trading of emissions permits allowed between them.
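To make the intro-econ logic concrete, here is a minimal sketch with invented numbers; nothing here comes from Fisher-Vanden and Olmstead, and only the logic of least-cost abatement follows the text:

    # Why tradeable permits reduce pollution at lower total cost:
    # two abaters with different marginal costs, one required cleanup.
    point_source_cost = 50     # $/pound of phosphorus abated at a treatment plant (invented)
    farm_cost = 10             # $/pound abated via land-management changes (invented)
    required_abatement = 1000  # pounds of abatement the watershed needs

    # Uniform rule: the point source must do all the abatement itself.
    uniform_cost = required_abatement * point_source_cost

    # With trading: the point source pays farms to abate on its behalf.
    trading_cost = required_abatement * farm_cost

    print(f"Uniform rule: ${uniform_cost:,}")  # $50,000
    print(f"With trading: ${trading_cost:,}")  # $10,000 -- same abatement, lower cost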

Monday, February 25, 2013

Trends in End-Of-Life Care

When talking about ways of curbing health care spending, someone always brings up the costs of acute care at the very end of life. Could we save significant money by not spending so much on people who are on the verge of dying? To what extent are we already changing the patterns of end-of-life care?

About 25-30% of Medicare spending goes to patients who are in their last year of life, according to Gerald F. Riley and James D. Lubitz in their 2010 study, "Long-term trends in Medicare payments in the last year of life" (Health Services Research, April 2010, 45(2):565-76). They also find that this number hasn't changed much over the last 30 years--that is, health care spending during the last year of life is rising at about the same pace as other Medicare spending--and that the percentage isn't much affected by adjusting for changes in age or gender of the elderly. On their estimates, Medicare spending on those who die in a given year is much higher than on those who survive the year: in 2006, Medicare spending on those who died during that year averaged $38,975, while spending on those who survived the year averaged $5,993.


Total Medicare spending in 2012 was about $560 billion. Thus, 25% of that amount would be $140 billion spent during the last year of life. It's often unclear at the time whether someone is actually in their last year of life, but say for the sake of argument that such cases could be identified, and spending in this area could be reduced by half. If attainable, cuts of this size would be $70 billion in annual savings, which is certainly a substantial sum. But to keep it in perspective, total U.S. health care spending is in the neighborhood of $2.6 trillion. Thus, the potential gains from even fairly aggressive limits on end-of-life health care spending through Medicare are a little under 3% of total U.S. health care spending.
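Spelled out as a minimal sketch, with all figures taken from the paragraph above and only the arithmetic added:

    # End-of-life Medicare spending, back-of-the-envelope
    total_medicare = 560e9    # total Medicare spending, 2012
    last_year_share = 0.25    # share going to patients in their last year of life
    total_us_health = 2.6e12  # total U.S. health care spending

    last_year_spending = total_medicare * last_year_share  # $140 billion
    hypothetical_savings = last_year_spending / 2          # $70 billion

    print(f"Savings as share of all U.S. health spending: "
          f"{hypothetical_savings / total_us_health:.1%}")  # ~2.7%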

To what extent is the U.S. health care system changing its practices in end-of-life care? In the February 6, 2013 issue of JAMA, a team of writers led by Joan Teno addresses this question in an article called "Change in End-of-Life Care for Medicare Beneficiaries" (vol. 309, #5, pp. 470-477). They find an intriguingly mixed set of patterns.

One common measure of end-of-life care is to see what share of patients died in a hospice or at home, compared to dying in the acute-care ward of a hospital. Their results show that from 2000 to 2009, the share of patients who died in the acute care section of a hospital declined from 32.6% to 24.6%; the share of patients who died at home rose from 30.7% to 33.5%; and the share of patients who died in a hospice rose dramatically from 21.6% to 42.2%. On the surface, these kinds of numbers certainly suggest a pattern of less aggressive end-of-life care.

But when Teno et al. dug just a bit deeper, they found that many of the hospice stays were extremely short--just a few days. Looking at the use of intensive care units in the last month of life, they found that it has risen from 24.3% in 2000 to 29.2% in 2009. In addition, the number of health care "transitions" from one care setting to another has risen both in the last 90 days of life and the last three days of life. For example, 10.3% of patients had a "transition" in the last three days of life in 2000, while 14.2% of patients had a transition in the last three days of life in 2009.

As Teno et al. put it: "Although a hospice stay of 1 day may be viewed as beneficial by a dying patient and family, an important yet unanswered research question is whether this pattern of care is consistent with patient preferences and improved quality of life. ... Our findings of an increase in the number of short hospice stays following a hospitalization, often involving an ICU stay, suggest that increasing hospice use may not lead to a reduction in resource utilization. Short hospice lengths of stay raise concerns that hospice is an "add-on" to a growing pattern of more utilization of intensive care services at the end of life."

Few questions in health care policy are harder than what should be spent on end-of-life care. It's fairly common for the elderly, when healthy, to say that they don't want extreme end-of-life measures. But when those same people become very ill, both they and their families often start thinking that extreme care makes a lot of sense. In addition, while perhaps the diagnostic and statistical techniques for figuring out a few months or a year in advance who is likely to die will improve over time, right now they are not very accurate. Thus, the common sense policies in this area tend to revolve around earlier counseling for the elderly, so that patients (and their families) can have a clearer sense of what they want in terms of end-of-life care, and improving hospice and end-of-life home care--after all, basic palliative services like intravenous fluids and antibiotics don't need to happen in a hospital setting.

Friday, February 22, 2013

The Financial Cycle: Theory and Implications

In the aftermath of the Great Recession, mainstream macroeconomists have been seeking in various ways to bring the financial sector into their models. As that activity implies, the financial sector had not previously been playing much of a role in mainstream models. Claudio Borio lays out a perspective on treating cycles in the financial sector as having a life of their own in "The financial cycle and macroeconomics: What have we learnt?", published in December 2012 as working paper #395 for the Bank for International Settlements. I should note that Borio's view of how the financial sector interrelates with the real economy is not conventional macroeconomic wisdom, but I should also note that conventional macroeconomics hasn't exactly covered itself with glory in the last few years.


Borio begins by pointing out that conventional macroeconomics was paying little attention to the financial sector in the years before the Great Recession, and argues that the strategies for trying to add a financial sector to existing models don't go nearly far enough. (Citations and footnotes are omitted from quotations throughout.) Here's Borio: "The financial crisis that engulfed mature economies in the late 2000s has prompted much soul searching. Economists are now trying hard to incorporate financial factors into standard macroeconomic models. However, the prevailing, in fact almost exclusive, strategy is a conservative one. It is to graft additional so-called financial “frictions” on otherwise fully well behaved equilibrium macroeconomic models ... The main thesis is that macroeconomics without the financial cycle is like Hamlet without the Prince. In the environment that has prevailed for at least three decades now, just as in the one that prevailed in the pre-WW2 years, it is simply not possible to understand business fluctuations and their policy challenges without understanding the financial cycle."

Borio argues that there is a "financial cycle" with its own dynamics. Here's a figure with U.S. data showing the regular business cycle, measured by variations in GDP, compared with the "financial cycle," based on estimates of credit, the credit/GDP ratio, and property prices. He argues that while business cycles are usually in the range of 1-8 years, "the average length of the financial cycle in a sample of seven industrialised countries since the 1960s has been around 16 years."

Borio argues that the peaks of the financial cycle are associated with financial crises. When a business cycle recession happens at the same time as the contraction part of a financial cycle, the recession is about 50% deeper.

This perspective on the financial cycle also offers some policy advice. Central banks and financial regulators should pay attention to credit/GDP ratios and to property prices. Borio writes: "The idea is to build up buffers in good times, as financial vulnerabilities grow, so as to be able to draw them down in bad times, as financial stress materialises. There are many ways of doing so, through the appropriate design of tools such as capital and liquidity standards, provisioning, collateral and margining practices, and so on. ... In the case of monetary policy, it is necessary to adopt strategies that allow central banks to tighten so as to lean against the build-up of financial imbalances even if near-term inflation remains subdued – what might be called the “lean option”. Operationally, this calls for extending policy horizons beyond the roughly 2-year ones typical of inflation targeting regimes and for giving greater prominence to the balance of risks in the outlook, fully taking into account the slow build-up of vulnerabilities associated with the financial cycle. ... In the case of fiscal policy, there is a need for extra prudence during economic expansions associated with financial booms. ... Financial booms are especially generous for the public coffers, because of the structure of revenues. And the sovereign inadvertently accumulates contingent liabilities, which crystallise as the boom turns to bust and balance sheet problems emerge, especially in the financial sector."

However, once the double-whammy of a financial crisis and a business cycle recession has hit simultaneously, Borio also argues that conventional policy responses may not work well. In a "balance sheet recession," fiscal and monetary policy may not be very capable of stimulating demand, and instead may encourage financial firms and businesses to put off the necessary hard steps they need to take, leaving the economy too dependent on government stimulation rather than on the private sector moving forward. As he writes:  "On reflection, the basic reason for the limitations of monetary policy in a financial bust is not hard to find. Monetary policy typically operates by encouraging borrowing, boosting asset prices and risk-taking. But initial conditions already include too much debt, too-high asset prices (property) and too much risk-taking. There is an inevitable tension between how policy works and the direction the economy needs to take."

The concept of a "financial cycle" has a plausible back-story. When times are good, borrowers and investors of all kinds tend to let down their guard, worry less about risks, and gradually become overextended--which can then bring on a counterreaction, or even in some cases a financial crisis. It's easy to point to financial crises, but it's harder to show convincingly that an earlier financial boom is the cause of the crisis. The nice smooth curve of financial cycles above is created by using statistical tools ("filtering") to blend together the underlying data on credit, the credit/GDP ratio, and property prices. Borio is up front about this difficulty and others, and has some suggestions for how the appropriate modeling might proceed. But even if one doesn't buy into the notion of a self-perpetuating financial cycle, standing apart from the regular business cycle, one lesson that everyone seems to have learned from the Great Recession is that rapid expansions of credit and rapid rises in property values have real macroeconomic risks--and thus are an appropriate target for policy.
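For readers curious about what this sort of filtering looks like in practice, here is a minimal sketch using the Christiano-Fitzgerald band-pass filter from the statsmodels library. It is not Borio's exact procedure, and the data here are a random-walk stand-in rather than actual credit or property-price series:

    # Extract a medium-term (roughly 8-30 year) cycle from a quarterly series
    # with a band-pass filter -- an illustration, not Borio's own method.
    import numpy as np
    from statsmodels.tsa.filters.cf_filter import cffilter

    rng = np.random.default_rng(0)
    credit_to_gdp = np.cumsum(rng.normal(size=200))  # 50 years of fake quarterly data

    # Keep fluctuations with periods between 32 and 120 quarters (8 to 30 years)
    cycle, trend = cffilter(credit_to_gdp, low=32, high=120)
    print(cycle[:4])  # the smooth "financial cycle" component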

Thursday, February 21, 2013

Rebuilding Unemployment Insurance

In theory, the federal government sets minimum guidelines for each state's unemployment insurance system, and then each state sets its own rules for what is paid in and what benefits are offered. Each state has its own unemployment trust fund. The idea is that the trust fund will build up in good economic times, and then be drawn down in recessions. But it hasn't actually worked that way for a long time, and the problem is getting worse. Christopher J. O’Leary lays out the issue and possible solutions in "A Changing Federal-State Balance in Unemployment Insurance?" written for the January 2013 Employment Research Newsletter published by the Upjohn Institute.

When a recession hits, the federal government has developed a habit of stepping in with extra unemployment insurance funds. For example, the feds stepped in with additional funding for extending unemployment benefits in 1958, 1961, 1971, 1974, 1982, 1991 and 2002--as well as during the most recent recession. With the feds stepping up, it has been easier and easier for the states to keep their unemployment taxes as low as possible. For example, average unemployment insurance taxes (adjusted for inflation) were $274/employee in 2008, lower than the $350/employee in 1994 and the $515/employee in 1984, according to Ronald Wilus of the U.S. Department of Labor.

As a result, over time the feds are paying for a larger share of unemployment insurance during recessions. Here's an illustrative figure from O'Leary.


For some perspective on the revenues coming into the unemployment trust funds from the regular unemployment tax, as opposed to how much money is going out, here's a table from a Congressional Research Service report on "Unemployment Insurance: Programs and Benefits," by Julie M. Whittaker and Katelin P. Isaacs, dated December 31, 2012. Notice that when unemployment rates were fairly low from 2005-2007, revenue exceeded outlays by about $10 billion per year. Then in 2009, 2010, and 2011, outlays exceeded revenue by something like $100 billion per year. The difference was made up by general taxpayer spending.

The intergovernmental incentives in the unemployment insurance system are clearly messed up. States have an incentive to keep unemployment insurance premiums fairly low, promise significant benefits, and then let the federal government pick up the tab when a recession occurs. What would be needed to get back to a system where states save up funds for unemployment insurance money in trust funds--even if some federal help might occasionally be needed?

One step suggested by O'Leary is to raise the "tax base." At present, the minimum federal standard requires that states collect unemployment insurance taxes on the first $7,000 of taxable wages--a level that was established back in 1983. Just adjusting that $7,000 base for inflation would mean increasing it to about $16,000. O'Leary notes that 35 states currently have a taxable wage base at or below $15,000.
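The adjustment itself is straightforward. Here is a minimal sketch, assuming approximate CPI-U annual averages of 99.6 for 1983 and 230 for 2012 (my assumed values, not figures from O'Leary):

    # Inflation-adjusting the $7,000 taxable wage base set in 1983
    base_1983 = 7_000
    cpi_1983, cpi_2012 = 99.6, 230.0  # approximate CPI-U annual averages (assumed)

    adjusted = base_1983 * (cpi_2012 / cpi_1983)
    print(f"${adjusted:,.0f}")  # roughly $16,000, matching O'Leary's figure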

A second step would be to have a rule that unemployment insurance benefits would not kick in until after a waiting period. O'Leary writes: "A much neglected potential reform on the benefit side would be to institute waiting periods of 2–4 weeks, with the duration of the wait depending inversely on the aggregate level of unemployment. ... A somewhat longer waiting period will reduce program entry by those with ready reemployment options, and help to preserve the income security strength of the system for those who are involuntarily jobless for 4, 5, or 6 months."

Yet another step would be to use federal rules to discourage states from lowballing the funding of their unemployment insurance and relying on an influx of federal funding. Here's O'Leary: "[T]he federal partner should institute minimum standards on weekly benefit levels and durations, and also tie potential durations of any future federal emergency benefits to the existing state maximum durations. For example, a state providing up to 26 weeks would get 13 weeks of federal temporary benefits, but if the state maximum were 20 weeks the federal supplement would be 10 weeks."

It's worth pointing out that unemployment insurance has a number of problems other than whether it is pre-funded. You need to meet certain qualification tests for unemployment insurance, typically based on earnings in the previous year or so, and as a result, many of the unemployed do not receive unemployment insurance. In January 2013, about 3.5 million people were receiving unemployment insurance benefits, but about 12.3 million people were unemployed.


There are also a number of proposals that seek to adjust the incentives so that unemployment insurance can better co-exist with incentives to find a new job. Some proposals are that unemployment benefits should be larger, so as to soften the economic blow of unemployment, but for a shorter time, to hasten the incentive to find a new job. Some proposals would require or allow people to set up individual unemployment accounts, which they could keep at retirement, so that people would tap their own money before turning to the government fund. One proposal would offer a bonus to those receiving unemployment insurance if they found a job quickly, because it could be less costly for the unemployment insurance trust fund if they find a job faster rather than linger on receiving benefits.

The Great Recession and its aftermath have wrecked the premises of the existing unemployment insurance system. It's time to rebuild.


Wednesday, February 20, 2013

Big Data and Development Applications

"Big data" has become a buzzword. It conveys the notion that our interconnected world is generating a vast array of data--and asks how that data can be used for analysis, social problem-solving, and private profit. However, I had not known that the United Nations has an organization called Global Pulse, which focuses on issues of Big Data from a development perspective. The Global Observatory, a publication of the International Peace Institute, had an interview last November with Robert Kirkpatrick, Director of UN Global Pulse.  Here, I'll quote from the interview with Kirkpatrick, and will also refer to a May 2012 white paper from Global Pulse called "Big Data for Development: Challenges and Opportunities." 

As a starting point, here's Kirkpatrick defining Big Data: "[B]ig data is a term that has come into vogue only in the last couple of years, and it refers to the tremendous explosion in volume and velocity and variety of digital data that is being produced around the world. The statistics are somewhat astonishing: there was more data produced in 2011 alone than in all of the rest of human history combined back to the invention of the alphabet."

The May 2012 report offers this comment (footnotes and references to figures omitted): "The world is experiencing a data revolution, or “data deluge”. Whereas in previous generations, a relatively small volume of analog data was produced and made available through a limited number of channels, today a massive amount of data is regularly being generated and flowing from various sources, through different channels, every minute in today’s Digital Age. It is the speed and frequency with which data is emitted and transmitted on the one hand, and the rise in the number and variety of sources from which it emanates on the other hand, that jointly constitute the data deluge. The amount of available digital data at the global level grew from 150 exabytes in 2005 to 1200 exabytes in 2010. It is projected to increase by 40% annually in the next few years ... This rate of growth means that the stock of digital data is expected to increase 44 times between 2007 and 2020, doubling every 20 months."

The flood of data relevant for development issues includes four categories, according to Global Pulse: 1) "Data exhaust" created by people's transactions with digital services, including web searches, purchases, and mobile phone use; 2) "Online information" available in news media and social media, as well as job postings and e-commerce sites; 3) Physical sensors that look at landscapes, traffic patterns, weather, earthquakes, light emissions, and much else; 4) Citizen reporting, when information is submitted by citizens through surveys, hotlines, updating of maps, and the like.

Of course, there are enormous challenges in dealing with Big Data, including privacy concerns, the sheer size of the datasets, how quickly they are expanding, and how to digest and interpret them. But the potential for understanding what is happening much more quickly is becoming apparent. As Kirkpatrick says: "[W]e now live in this hyper-connected world where information moves at the speed of light, and a crisis can be all around the world very, very quickly, but we’re still using two- to three-year-old statistics to make most policy decisions. The irony is, we’re swimming in this ocean of digital data, which is being produced for free all around us."

Private sector firms like Google are already using Big Data. Some of the public sector and research studies include:
  • A country's GDP can be estimated based on light emissions at night, as perceived by satellites. 
  • Outbreaks of flu or cholera or dengue fever can be identified much more quickly by looking at web searches. Another study used Twitter mentions of earthquakes as a way to get a faster response to quakes.
  • One study was able to predict where people were at any time with greater than 90% accuracy based on cell-phone records showing past movements. Another study in developing countries could predict income with 90% accuracy based on how often you top off the air time on your mobile phone. Kirkpatrick says: "Even if you are looking at purely anonymized data on the use of mobile phones, carriers could predict your age to within, in some cases, plus or minus one year with over 70 percent accuracy. They can predict your gender with between 70 and 80 percent accuracy."
  • A study in Indonesia was able to approximate a consumer price index for basic foods by looking at comments on social media. (Apparently, Jakarta produces more tweets than any other city in the world.) Other studies have sought evidence on food shortages or food price volatility by looking at social media.

I confess that the social scientist within me finds the research possibilities here to be fascinating. Kirkpatrick says: "Now think about this, this is astonishing: the ability to see in real time where beneficiaries are can allow us to understand exactly where the population is that we need to reach, and if you combine that with information on the size of air-time purchases, you can tell how much money these people have. You start to be able to extract basic demographic information, population movement, and behavior data from this information while fully protecting privacy in the process.

What we’re focused on now is working with mobile carriers around the world, including in Indonesia, to get access to archives of anonymized call records and purchase records, because what we do is essentially correlate that data with official statistics. You look at the movement patterns, the mobile service consumption patterns, the social-network patterns that you can derive from how people interact and compare that to food prices, fuel prices, unemployment rates, disease outbreaks, earthquakes, and look at how a population was affected. Or, you compare it to when a program was initiated in the field or when a policy initiative got off the ground: did it actually work? The potential for monitoring and evaluation here as well is quite remarkable."

Moreover, Kirkpatrick describes the effort by Global Pulse to find a middle ground in concerns about privacy and access to Big Data: "Right now, the conversation around big data is very polarized. You might call it "Germany vs. Mark Zuckerberg." You have the very conservative prohibition against reuse without explicit permission that has become pervasive in the European Union; it’s a very guarded approach. At the opposite end of the spectrum, you have companies that live on big data, which are saying privacy is dead, profit is king. We’re trying to insert a third pole into this debate, which is to say, big data is a raw public good. But to do that we have to create a kind of R & D sandbox where we can experiment with it and learn how to use it safely."

At least to me, many of the existing efforts to use Big Data seem interesting--but relatively small potatoes. As the stock of data multiplies many times over in the coming years, along with techniques and capabilities to digest and analyze that data, challenges and possibilities will probably emerge that I can't even imagine now. The May 2012 report quotes the comment from social technology guru Andreas Weigend, who said: "[D]ata is the new oil; like oil, it must be refined before it can be used."


Tuesday, February 19, 2013

Social Welfare Programs and Incentives to Work

There's a fundamental conflict between helping those in need and encouraging self-support. I sometimes say that if you give a person a fish, every day, then you remove that person's incentive to learn to fish. But if you vow not to give them a fish, they may starve to death while learning to fish.

C. Eugene Steuerle explores the current state of this conflict in "Labor Force Participation, Taxes, and the Nation’s Social Welfare System," which is testimony given to the Committee on Oversight and Government Reform of the U.S. House of Representatives on February 14, 2013. As a starting point, focus first on the support that we give to those in need. Steuerle writes:

"Figures 1 and 2 display the benefits available to a single mother with two children in 2011 under these two cases. The first case, what I call the “universal” case, shows the benefits available to anyone whose income was low enough to qualify for them, namely nutrition assistance and tax benefits. The second case adds to those benefits narrower assistance—TANF and housing subsidies and supplements to nutrition assistance—that is available to some households but not to others based on availability, time limits, and other criteria. Because health reform will soon alter the delivery of health benefits in an important way, in both cases I assume that the provisions of the Affordable Care Act are in effect."



It's useful to remember that not all of the benefits in these graphs are cash benefits, and that they represent averages that will vary across families. For example, the amount that families receive in Medicaid benefits is not received in cash, but in the form of access to health care services, and the amount will vary from year to year, depending on health. I find the details of these figures interesting for what they reveal about the size of spending and support from different programs and the income range over which programs operate. For example, the figures highlight that SNAP, more commonly known as "food stamps," is a substantially larger program than TANF, more commonly known as "welfare." The figures also show the relatively large size of health care benefits like Medicaid, CHIP, and the "exchange subsidy" compared with other forms of benefits, following a pattern that as a society we are willing to pay large health care bills for those with low incomes, or to give them food stamps, but we are less willing to give them cash benefits.

But the main point that Steuerle emphasizes is in the overall hump shape of the curves: that is, more support for those at lower incomes, and then declining support as income rises. This pattern makes perfect sense: more fish for those with very low incomes, less fish as people learn to fish and bring in their own income. But it also means that those with low incomes face what economists call a "negative income tax."

A "positive" income tax is the usual tax in which, as you earn additional income, the government taxes a percentage. A "negative" income tax arises when, as you earn additional income, the government phases out benefits it would otherwise have provided. Both kinds of taxes have the same  result on incentives: when you earn an additional marginal dollar of income, you take home less than a dollar after taxes. When social programs phase out quickly as income rises, then a situation can arise where earning an additional dollar of income means losing 50 cents or more in benefits--thus greatly reducing the incentives to work.

Here are the effective marginal tax rates as Steuerle calculates them. That is, adding together both the "positive" tax rates of federal income taxes, state taxes, and payroll taxes for Social Security and Medicare, and the implicit "negative" tax rates of the phase-out of social programs, what is the effective tax rate on a marginal dollar of income as income rises? Notice how the phase-out of social programs--that is, how their support declines as earned income rises--leads to a spike in the overall "effective" marginal tax rates that people experience at around $10,000-$15,000 in earned income.


Of course, if you're someone who doesn't believe that marginal tax rates affect work effort, then this sort of chart won't bother you. Personally, I'm concerned about the effects of marginal tax rates on incentives not just at the top of the income scale, and not just at the bottom of the income scale, but at all income levels.


Those interested in this subject might also see my post of November 16, 2012, based on a report from the Congressional Budget Office, about "Marginal Tax Rates on the Poor and Lower Middle Class."


Monday, February 18, 2013

Taking Apprenticeships Seriously

The United States puts a heavy emphasis on a college degree as the path to economic and social success, and thus it's a familiar pledge of politicians that a higher share of the population will attend college. For example, in a speech to Congress on February 24, 2009, President Obama set a goal that "by 2020, America will once again have the highest proportion of college graduates in the world."

But this emphasis on college has two difficulties: 1) as a society, we don't actually mean it; and 2) it probably isn't an appropriate goal, anyway. After all, if we really supported a widespread expansion of college education, we would do considerably more than pump up the loans available to students. Instead, we would be figuring out how current colleges can expand their enrollments, and starting a new wave of colleges and universities--and figuring out how to keep these options affordable to students. Instead, the U.S. has lost its lead as the country in the world with the highest proportion of college graduates.

Moreover, a four-year college degree just isn't going to be right for everyone. Think about those students who managed to finish a high school degree, but were in the bottom third or bottom quarter of the class. For many of these students, their interactions with the educational system have not been happy ones, and the notion that their life plan should start off with yet another four years of education is likely to be met with hard-earned dislike and disbelief.

So what's the alternative for these students, in a U.S. economy that places considerable value on skilled labor? Betty Joyce Nash offers one angle on these issues in "Journey to Work: European Model Combines Education with Vocation" in the Fourth Quarter issue of Region Focus, which is published by the Federal Reserve Bank of Richmond. She writes:

"In the United States, vocational education has been disparaged by some as a place for students perceived as unwilling or unable. The United States still largely champions college as the route to higher lifetime wages and the flexibility to retool skills in times of economic change. Yet just 58 percent of the 53 percent of college-goers in 2004 who started at four-year institutions finished within six years. Moreover, 25 percent of those who enter two-year community colleges don’t finish. Only about 28 percent of U.S. adults over age 25 actually have a bachelor’s. What about the rest? What’s their path to the workplace? It may be unrealistic to expect everyone to finish college, but most students will need more than a high school education as jobs become more complex."
Nash focuses her discussion on apprenticeships and vocational education, and as is common in these kinds of arguments, she focuses some attention on practices in Germany and Switzerland. Thus:

"Germany and Switzerland educate roughly 53 percent and 66 percent of students, respectively, in a system that combines apprenticeships with classroom education — the dual system. This approach brings young people into the labor force more quickly and easily. Unemployment for those in
Switzerland between the ages of 15 and 24 in 2011 was 7.7 percent; in Germany, 8.5 percent. In the United States that year, the rate was 17.3 percent, down from 18.4 percent the previous year. (A 10 percent higher rate of participation in vocational education in selected Organization for Economic
Cooperation and Development countries led to a 2 percent lower youth unemployment rate in 2011, according to economist Eric Hanushek of Stanford University.)"

Here's a bit more detail on Switzerland and on Germany:

"At ages 15 to 16, in Switzerland, about two-thirds of every cohort enter  apprenticeships, [Stefan C.] Wolter notes. Apprentices in fields from health care to hairdressing to engineering attend vocational school at least one day a week for general education and theoretical grounding for roughly three years. On other days, they apprentice under the supervision of a seasoned employee. What makes the system work so well is firm participation, which is relatively strong. “If you exclude the one-person companies and the businesses that cannot train, about 40 percent of companies that could train do train,” Wolter says. ..."

"In Germany, about 25 percent of students go to university, and apprenticeships employ another 53 percent. At 16,  they sign on for a three-year stint in one of 350 occupations. Another 15 percent may attend vocational schools. Those who are less qualified take a full-time vocational course or temporary job until they land an apprenticeship. About one-quarter of German employers participate. ..."

"Other western European countries use variations of the Swiss and German model. Belgium, Finland, Sweden, and the Netherlands train most vocational students in school programs, while Germany, Switzerland, Austria, and Denmark have large school-and-work programs. The United States is an outlier: By international standards and official definitions, it has virtually no vocational education and training program."

From a U.S. perspective, it's hard to think clearly about how this kind of widespread use of apprenticeships and vocational school would even work.  Half or two-thirds of 16 year-old students involved in paid internships? A quarter or a third of all employers providing a large number of such internships as part of their regular business model? Internships across a wide array of professions, both blue- and white-collar? My American mind boggles. But given that a four-year college degree is demonstrably not a good fit for many young Americans, it's past time to take some of the alternatives more seriously.


I've posted from time to time about the merits of apprenticeships and various alternative credentials.

For examples, see this post from October 18, 2011, on "Apprenticeships for the U.S. Economy," this post from November 3, 2011, on "Recognizing Non-formal and Informal Learning," and this post from January 16, 2012, on "Certificate Programs for Labor Market Skills."


Friday, February 15, 2013

Obesity and Healthy Snacks

The rise in American rates of obesity can be traced back to what seems like a fairly small rise in daily calories consumed. I learned this lesson from an article on the causes of obesity about 10 years back in my own Journal of Economic Perspectives. In "Why Have Americans Become More Obese?" David M. Cutler, Edward L. Glaeser and Jesse M. Shapiro wrote that the "10- to 12-pound increase in median weight we observe in the past two decades requires a net caloric imbalance of about 100 to 150 calories per day. These calorie numbers are strikingly small. One hundred and fifty calories per day is three Oreo cookies or one can of Pepsi. It is about a mile and a half of walking."
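A rough way to see why the long-run gain is that small: body weight rises until the extra intake is absorbed by higher maintenance needs. Here is a minimal sketch, assuming the common rule of thumb that maintaining a pound of body weight costs roughly 10-15 extra calories per day (my assumption, not a figure from the article):

    # Equilibrium weight gain from a sustained 150-calorie daily surplus
    extra_calories_per_day = 150
    maintenance_kcal_per_pound = 12  # assumed midpoint of 10-15 kcal/day per pound

    # Weight rises until the surplus is fully consumed by maintenance costs.
    equilibrium_gain = extra_calories_per_day / maintenance_kcal_per_pound
    print(f"Long-run weight gain: about {equilibrium_gain:.0f} pounds")  # ~12 lb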

Elizabeth Frazao, Hayden Stewart, Jeffrey Hyman, and Andrea Carlson apply a similar logic in "Gobbling Up Snacks: Cause or Potential Cure for Childhood Obesity?" which appears in the December 2012 issue of Amber Waves, published by the Economic Research Service at the U.S. Department of Agriculture.

When I was a child, my mother had a clear-cut policy on before-dinner snacks: I was allowed to eat all the raw carrots I wanted. The parental philosophy appears to have gone out of date. The USDA economists explain: "Consumption of snacks among children has increased markedly over the last 35 years. In the late 1970s, American children consumed an average of only one snack a day. Today, they are consuming nearly three snacks per day. As a result, daily calories from children's snacks have increased by almost 200 calories over the period." They propose that a shift to healthier snacks could play a useful role in cutting childhood obesity.

As a starting point, here's a table showing the calorie count for some fruit and vegetable snacks, compared with some commonly consumed snack foods. As a parent of three, I confess that I can't see my children snacking on broccoli florets. But grapes, strawberries, cantaloupe, or apples are all possibilities, and even among the snack foods, some choices like popsicles or fruit rolls are at least better than the alternatives.
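Rough arithmetic suggests that such swaps could offset most of the rise in snacking calories. A quick sketch follows; the calorie figures in it are approximate typical values that I'm supplying for illustration, not numbers taken from the USDA table.

```python
# Rough arithmetic on swapping two packaged snacks a day for fruit.
# Calorie figures are approximate typical values (my assumption,
# not taken from the USDA table).
snack_kcal = {
    "chips (1 oz bag)": 150,
    "three sandwich cookies": 160,
    "medium apple": 95,
    "grapes (1 cup)": 60,
}

savings = ((snack_kcal["chips (1 oz bag)"] - snack_kcal["grapes (1 cup)"])
           + (snack_kcal["three sandwich cookies"] - snack_kcal["medium apple"]))
print(f"Two swaps save about {savings} kcal per day")
# Two swaps save about 155 kcal per day
```

That's most of the nearly 200-calorie increase the authors document, without cutting the number of snacks at all.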

One common reaction to this sort of list is that the healthy snacks cost too much. It is true that cookies, crackers, and chips make a relatively cheap snack, and some of the healthier choices, like tangerines, grape tomatoes, and strawberries, are costlier. But some of the unhealthy choices like muffins and Danish are also on the costly side, while bananas, oranges, and those good old carrot sticks are not especially pricey. Here are their calculations of cost-per-snack.

Generalizing wildly from personal experience, as social scientists are wont to do, it feels to me as if a cultural shift has occurred in what counts as a "suitable" snack. Two of my children were, at different times, in soccer leagues where the parents organized themselves to make sure that each child got a "treat" of a cookie or chips and a large sugared drink after each game. When I coached a different youth soccer team, I brought bags of orange and apple slices for the players. As far as I could tell, the kids were equally happy with the oranges--at least once it was clear that cookies and chips would not be forthcoming.

Consuming healthier (and fewer) snacks could make a real difference to child obesity--and it makes sense for adults, too.


Thursday, February 14, 2013

Maybe Too Big To Fail, but Not Too Big to Suffer

Which financial institutions are "too big to fail"? According to a report from the international Financial Stability Board, a working group of governments and central banks that tries to facilitate international cooperation on these issues, here's the list as of November 2012.

Ready for a nice bowl of acronym soup? This list is actually the "global systemically important banks," known as the G-SIBs, which are a subcategory of the "global systemically important financial institutions," or G-SIFIs. Already finalized, as the Financial Stability Board (FSB) explains, are guidelines for the "domestic systemically important banks," the D-SIBs, which national governments are expected to implement by 2016. Meanwhile, the International Association of Insurance Supervisors (IAIS) has proposed a method for deciding which insurers are "global systemically important insurers," the G-SIIs. The FSB and the International Organization of Securities Commissions (IOSCO) are now working on a method to identify the systemically important non-bank non-insurance financial institutions (no acronym yet available).

Meanwhile, the Financial Stability Oversight Council (FSOC) within the U.S. Department of the Treasury is working on its own lists. In its 2012 annual report, it designated eight systemically important "financial market utilities"--that is, firms that are intimately involved in carrying out various financial transactions. Here's the list: the Clearing House Payments Company, CLS Bank International, the Chicago Mercantile Exchange, the Depository Trust Company, Fixed Income Clearing Corporation, ICE Clear Credit, National Securities Clearing Corporation, and the Options Clearing Corporation. (And I freely admit that I have only a fuzzy idea of what several of those companies actually do.)

In addition, the Dodd-Frank legislation presumes that U.S. banks are systemically important if their consolidated assets exceed $50 billion. David Luttrell, Harvey Rosenblum, and Jackson Thies explain these points, along with a nice overview of many broader issues, in their Dallas Fed staff paper "Understanding the Risks Inherent in Shadow Banking: A Primer and Practical Lessons Learned." For perspective, they offer a list of the largest U.S. bank holding companies, all of which comfortably exceed the $50 billion benchmark for consolidated assets.
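The Dodd-Frank presumption is a bright-line rule, mechanical enough to state in a few lines of code. In the sketch below, the bank names and asset figures are entirely made up for illustration; only the $50 billion threshold comes from the legislation.

```python
# Dodd-Frank's bright-line rule: a U.S. bank holding company is presumed
# systemically important if consolidated assets exceed $50 billion.
# Bank names and figures below are hypothetical, for illustration only.
SIFI_THRESHOLD = 50  # $ billions, from Dodd-Frank

hypothetical_banks = {
    "Bank A": 2200,  # consolidated assets, $ billions (made up)
    "Bank B": 310,
    "Bank C": 48,
}

for name, assets in hypothetical_banks.items():
    status = ("presumed systemically important" if assets > SIFI_THRESHOLD
              else "below the threshold")
    print(f"{name}: ${assets}B in consolidated assets -> {status}")
```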


This litany of who is "systemically important" feels disturbingly long, and it's only getting longer. But ultimately, it's a good thing to have such lists--at least if they lead to policy changes. Once you have admitted that a number of financial institutions are too big to fail, because their failure would lead to too great a disruption in financial markets, and once you have then made a commitment that government should not bail out such institutions, what policy prescription follows?

The proposal from the Financial Stability Board is that the G-SIBs (global systemically important banks, of course) should face a different set of regulatory rules. As the Dallas Fed economists explain, these could include "higher capital requirements, supervisory expectations for risk management functions, data aggregation capabilities, risk governance, and internal controls." There are two difficulties with this approach. First, it may not work. After all, a considerable regulatory apparatus in the U.S. did not prevent the financial crisis of 2007-2009. And second, it may work, but with undesired side effects. In particular, if there are heavy rules on one set of regulated financial institutions, then there will be a tendency for financial activities to flow to less-regulated financial institutions. As the authors put it: "If regulation constrains commercial banks’ risk taking, many questionable assets may simply migrate to less-regulated entities."

I don't oppose regulating the SIFIs (that would be "systemically important financial institutions") more heavily. But it's important to be clear on the limits of this approach. After all, the problem is not just that these institutions are big, but that they are so tightly interconnected with other institutions. As Luttrell, Rosenblum, and Thies explain: "TBTF [that would be "too big to fail"] is not just about bigness; it also includes “too many to fail” and “too opaque to regulate.”"

It seems to me that the key here is to remember that maybe some institutions are too big to fail, but they aren't too big to suffer! In particular, they aren't too big to have their top managers booted out--without bonuses. They aren't too big to have their shareholders wiped out, and the company handed over to bondholders--who are then likely to end up taking losses as well. One task of financial regulators should be to design and pre-plan what they call an "orderly resolution." The trick is to devise ways so that if a systemically important firm runs into financial difficulties, its tasks and external obligations are not much disrupted, for the sake of financial stability, but those who invest in the firm and those who manage it still face costs.
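To make "not too big to suffer" concrete, here is a stylized sketch of the loss-allocation logic: shareholders absorb losses first and are wiped out, and any shortfall beyond that falls on bondholders as a haircut when the firm is handed over to them. This is my own simplified illustration, not an actual resolution procedure.

```python
# Stylized resolution waterfall: shareholders absorb losses first; if
# losses exceed equity, shareholders are wiped out, the firm passes to
# bondholders, and they take a haircut on the remainder.
# A simplified illustration, not an actual resolution procedure.

def resolve(assets: float, debt: float, loss: float) -> None:
    equity = assets - debt
    if loss <= equity:
        print(f"Shareholders absorb the full loss of {loss}; equity falls to {equity - loss}.")
    else:
        shortfall = loss - equity
        recovery = (assets - loss) / debt
        print(f"Equity of {equity} is wiped out; the firm passes to bondholders,")
        print(f"who absorb the remaining {shortfall} and recover {recovery:.0%} of their claims.")

# Hypothetical firm: 100 in assets, 90 in debt (so 10 in equity), hit by a loss of 25.
resolve(assets=100, debt=90, loss=25)
# Equity of 10 is wiped out; the firm passes to bondholders,
# who absorb the remaining 15 and recover 83% of their claims.
```

The point of pre-planning is that this allocation can happen over a weekend, without interrupting the firm's obligations to everyone else.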

Wednesday, February 13, 2013

Why Has Health Information Technology Been Ineffective?


Health information technology is one of the methods often proposed to help rein in rising health care costs. The underlying story is plausible: greater efficiency in providing care and handling the paperwork burden of medicine, and greater safety for patients, because providers can be aware of past medical histories and ongoing treatments. However, at least so far, health information technology hasn’t done much to reduce costs. Arthur L. Kellermann and Spencer S. Jones ask “What It Will Take to Achieve the As-Yet-Unfulfilled Promises of Health Information Technology” in the first issue of Health Affairs for 2013 (pp. 63-68). (This journal is not freely available on-line, but many academic readers will have access through library subscriptions.)

Back in 2005, a group of RAND researchers forecast that rapid adoption of health information technology could save $81 billion annually. Kellermann and Jones essentially ask: Why hasn’t this vision come to pass?  Here are some of their answers (as usual, footnotes are omitted).

Health providers and patients have been slow to adopt information technology. “The most recent data suggest that approximately 40 percent of physicians and 27 percent of hospitals are using at least a “basic” electronic health record. … Uptake of health IT by patients is even worse.”

Existing health information technology systems don't interconnect. “Are modern health IT systems interconnected and interoperable? The answer to this question, quite clearly, is no. The health IT systems that currently dominate the market are not designed to talk to each other. … As a result, the current generation of electronic health records function less as “ATM cards,” allowing a patient or provider to access needed health information anywhere at any time, than as “frequent flyer cards” intended to enforce brand loyalty to a particular health care system.”
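What interoperability would mean in practice is that a record exported from one vendor's system could be read by any other. Here is a toy sketch of the idea; the schema and field names are invented for illustration, and real-world efforts revolve around shared messaging standards such as HL7 rather than anything this simple.

```python
# Toy illustration of interoperability: a vendor's proprietary record
# is exported into a shared schema that any other system can read.
# The schema and field names are invented for illustration.

SHARED_FIELDS = ("patient_id", "date", "diagnosis", "medications")

def export_from_vendor_a(rec: dict) -> dict:
    """Vendor A stores records under its own internal keys."""
    return {
        "patient_id": rec["pt"],
        "date": rec["visit_date"],
        "diagnosis": rec["dx"],
        "medications": rec["meds"],
    }

def import_anywhere(shared: dict) -> None:
    """Any system that knows the shared schema can accept the record."""
    assert all(field in shared for field in SHARED_FIELDS)
    print(f"Record for patient {shared['patient_id']} is readable by any participating system.")

record_a = {"pt": "12345", "visit_date": "2013-02-13",
            "dx": "hypertension", "meds": ["lisinopril"]}
import_anywhere(export_from_vendor_a(record_a))
```

The "frequent flyer card" problem, in these terms, is that each vendor keeps its own internal keys and never agrees to the export step.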

Health care providers dislike the existing information technology systems. “Considering the theoretical benefits of health IT, it is remarkable how few fans it has among health care professionals. The lack of enthusiasm might be attributed, in part, to the sobering results of studies showing that in many cases health IT has failed to deliver promised gains in productivity and patient safety. An even more plausible cause is that few IT vendors make products that are easy to use. As a result, many doctors and nurses complain that health IT systems slow them down.”

Existing health information technology can raise costs. On this point, the authors cite a New York Times article from last fall by Reed Abelson, Julie Creswell, and Griff Palmer. (Full disclosure: Reed Abelson was a friend of mine back in college days.) The NYT story reports: "[T]he move to electronic health records may be contributing to billions of dollars in higher costs for Medicare, private insurers and patients by making it easier for hospitals and physicians to bill more for their services, whether or not they provide additional care. Hospitals received $1 billion more in Medicare reimbursements in 2010 than they did five years earlier, at least in part by changing the billing codes they assign to patients in emergency rooms, according to a New York Times analysis of Medicare data from the American Hospital Directory. Regulators say physicians have changed the way they bill for office visits similarly, increasing their payments by billions of dollars as well."


Kellermann and Jones end with a plea that health information technology systems should be built on principles of interoperability, ease of use, and patient-centeredness. I have no disagreement with the principles, but I would note that even within individual companies, it has often proven quite time-consuming and difficult to integrate information technology into operations in a full and productive way. Thus, it's no surprise to me that the health care industry has faced a number of stumbling blocks. I’ve heard anecdotal stories of doctors spending inordinate amounts of time clicking through menus on some IT system, trying to figure out which boxes to check to best represent a diagnosis and a course of care. I’ve heard that some doctors, as they master the system, find that it becomes easier to bill for many separate small services that they wouldn’t previously have bothered to write up.

It seems that it should be possible for the big health care finance operations, both public and private, to get together and hammer out a basic, flexible framework for health care information technology. But it doesn't seem to be happening.