By JEFF GOLDSMITH
How the Search for Perfect Markets has Damaged Health Policy
Sometimes ideas in healthcare are so powerful that they haunt us for generations even though their link to the real world we all live in is tenuous. The idea of “moral hazard” is one of these ideas. In 1963, future Nobel Laureate economist Kenneth Arrow wrote an influential essay about the applicability of market principles to medicine entitled “Uncertainty and the Welfare Economics of Medical Care”.
One problem Arrow mentioned in this essay was “moral hazard”: the enhancement of demand for something people used to buy for themselves once it is financed through third-party insurance. Arrow described two varieties of moral hazard: the patient version, in which insurance lowers both the final cost of care and the patient’s inhibitions about seeking it, raising demand; and the physician version, in which insurance pays for something the physician controls by virtue of a steep asymmetry of knowledge between physician and patient, so more care is provided than is actually needed. The physician-patient relationship is “ground zero” in the health system.
Moral hazard was only one of several factors Arrow felt would make it difficult to apply rational economic principles to medicine. The highly variable and uniquely threatening character of illness was a more important factor, as was the limited scope of market forces, because government provision of care for large numbers of poor folk was required.
One key to the durability of Arrow’s thesis was timing: it was published just two years before the enactment of Medicare and Medicaid in 1965, which dramatically expanded the government’s role in financing healthcare for the elderly and the categorically needy. In 1960, US health spending was just 5% of GDP, and a remarkable 48% of health spending was out of pocket by individual patients.
After 1966, when the programs took effect, health spending took off like the proverbial scalded dog. For the next seven years, Medicare spending rose nearly 29% per year, and explosive growth in health spending rose to the top of the federal policy stack. By 2003, health spending had reached 15% of GDP! Arrow’s moral hazard thesis quickly morphed into a “blame the patient” narrative that became a central tenet of the emerging field of health economics, as well as of the conservative critique of the US health cost problem.
Fuel was added to the fire by Joseph Newhouse’s RAND Health Insurance Experiment in the 1980s, which found that patients who bore a significant portion of the cost of care used less care and were apparently no sicker at the end of the eight-year study period. An important and widely ignored coda to the RAND study was that patients with higher cost shares were incapable of distinguishing between useful and useless medical care, and thus stinted on life-saving medications, diminishing their longer-term health prospects. A substantial body of consumer research has since demonstrated that patients are in fact terrible at making “rational” economic choices regarding their health benefits.
The RAND study provided justification for ending so-called first-dollar health coverage and, later, for high-deductible health plans. Today more than half of all Americans have high-deductible health coverage. Not surprisingly, half of all Americans also report foregoing care because they do not have the money to pay their share of the cost!
However, a different moral hazard narrative took hold in liberal/progressive circles, which blamed the physician, rather than the patient, for the health cost crisis.
Anne and Herman Somers argued that physicians had target incomes, and would exploit their power over patients to increase clinical volume regardless of actual patient needs in order to meet those targets. John Wennberg and colleagues at Dartmouth later indicted excessive supply of specialty physicians for high health costs. Wennberg’s classic analysis of New Haven vs. Boston healthcare use was later shredded by Buz Cooper for ignoring the role of poverty in Boston’s much higher use rates.
The durable “blame the physician” moral hazard thesis has led American health policy on a futile five-decade-long quest for the perfect payment framework that would damp down health cost growth–first capitation and HMOs, then, during the Obama years, “value-based care”–a muddy term for incentives to providers that will eliminate waste and unnecessary care. Value-based care advocates assume that physicians are helpless pawns of whatever schedule of financial rewards is offered them, like rats in a Skinner box. If policymakers can just get the “operant conditioning schedule” right, waste will come tumbling out of the system.
The end result of this narrative: thanks in no small part to the festival of technocratic enthusiasm that accompanied ObamaCare (HITECH, MACRA, etc.), physicians and nurses now spend as much time typing and fiddling with their electronic health records to justify their decisions as they do caring for us. Controlling physician moral hazard through AI-driven claims-management algorithms has become a multi-billion-dollar business. The biggest “moral hazard mitigation” company, UnitedHealth Group, has a $500 billion market cap.
Thus the poisonous legacy of Arrow’s “moral hazard” thesis has been two warring policy narratives that blame one side or the other of the doctor-patient relationship for rising health costs. It has given us a policy conversation steeped in mistrust and cynicism. You can tell if someone is a progressive or conservative merely by asking who they blame for rising health costs!
There were credible alternative explanations for the post-Medicare cost explosion. Recall that the point of expanding health coverage in the first place was that better access to care DOES in fact improve health. Medicare lifted tens of millions of seniors out of poverty, improving both their nutrition and living conditions. Medicaid dramatically broadened access to care for tens of millions in poverty. This expansion of coverage, and the added costs, deserve much credit for the almost nine-year improvement in Americans’ life expectancy from 1965 to 2015.
It is also worth recalling that the two most explosive periods of inflation in the post-WWII US economy were the late 1960s, the so-called Guns and Butter economy that financed the Vietnam War, and the mid-1970s to 1981, fueled by the Arab oil embargo. These periods of high inflation coincided with the coverage expansion, amplifying its cost impact.
And of course, the 1980’s also saw a flood of optimistic, high energy baby boomer physicians, the result of a dramatic federally funded expansion of physician supply begun by Congress during the 1970’s. The reason for this surge: we did not have enough physicians to meet the demands of the newly enfranchised Medicare and Medicaid populations.
This surge of aggressive young physicians coincided with a dramatic expansion in the capabilities of our care system. Non-invasive imaging technologies such as MRI, CT and ultrasound, along with ambulatory surgery, dramatically lowered both the risks and costs of surgical care. The advent of effective cancer treatments cut the cancer death rate by one-third from its 1991 peak. Statins and less invasive heart treatments have reduced mortality from heart disease by 4% per year since 1990, despite the rise in obesity!
Medicine today is of a different order of magnitude in clinical effectiveness, technical complexity and, yes, cost, than what was on offer in 1965. No one would trade today’s health system for that one.
However, the biggest problem with the moral hazard theses–both of them–was the assumption that the physician and the patient are primarily motivated by “maximizing their utility” in the healthcare transaction. Arrow knew better. He emphasized the role that fear and existential risk played in their interaction, given that illness, particularly serious illness, is, as he put it, “an assault on personal integrity”. Given the level of personal risk, it is easy to understand why neither patients nor physicians obsess over the risk/benefit relationship of every single medical decision.
By reducing the physician-patient interaction to a morally fraught mutual quest for the proverbial free lunch, economists have not only insulted both parties, but grossly oversimplified this complex interaction. Is a sick person really “consuming” medical care, like an ice cream bar or a movie? Is the physician really “selling” solutions regardless of their effectiveness, unconstrained by pesky professional ethics, or rather groping through fraught uncertainty to apply their knowledge to helping their patient recover?
In contrast to virtually every other Western country, American health policy has been obsessed for nearly sixty years with fighting moral hazard and, in the process, has saddled almost 100 million Americans with $195 billion in medical debts (the vast majority of which are uncollectible). Isn’t it ironic that those other wealthy countries that provide their citizens care free at the point of service spend 30% to 50% less per capita on healthcare than we do? And that both physician-visit and hospitalization rates are far lower in the US than in most of these countries?
There is no question that healthcare in the US today is very expensive. But health costs have been dead flat as a percentage of US GDP for the past thirteen years. The explosive growth in health costs is over. Increasingly, attention is turning to the real culprit–socially determined causes of illness, and the inadequacy of our policies toward nutrition, shelter, mental health, gun violence and investment in public health. It’s time for the economists to eat some humble pie, and acknowledge that medicine will probably never fit in their cartoon universe of “Pareto optimality in perfect markets”.
Jeff Goldsmith is a veteran health care futurist, President of Health Futures Inc and regular THCB Contributor. This comes from his personal substack
Credits: thehealthcareblog