Iatrogenics OR When Doing Nothing Might Be the Best Alternative

i·at·ro·gen·ic /īˌatrəˈjenik/
Relating to illness caused by medical
examination or treatment.
— Google Definitions

I learned about the word iatrogenic when reading the book Writing to Learn by William Zinsser. The book, written in 1984, used the following passage as an example of medical writing. It talks about the link between medical prescriptions and opium addiction:

The medical profession has a long record of treating patients with useless or harmful remedies, often in clinical settings of complete mutual confidence. Iatrogenic diseases, complications and injury have been, in fact, common in the history of medicine. One need only look upon addiction to certain dispensed drugs as one variation among the occasional effects of drug therapy.

I thought, “What an interesting new word!” as did Zinsser, who also had to look it up. Then I came across Nassim Nicholas Taleb’s book Antifragile and found that he had also fallen in love with the word, expanding the idea beyond medicine into a class of issues he called iatrogenics.

Iatrogenics is different from malpractice. Malpractice is doing an operation wrong. Iatrogenics is doing a treatment correctly and still causing harmful side effects. When doctors ignore these side effects, they are far more likely to use all the tools at their disposal, like drugs or surgery, whether or not it’s a good idea in the long term.

Let’s look at a recent example. The New York Times recently published “Heart Stents Are Useless for Most Stable Patients. They’re Still Widely Used.” While stents have no medical benefit for these patients, putting one in makes both doctors and patients feel like they are doing something, like they are in control. And, from both points of view, “they seem to work,” even though they work no better than a placebo.

So what’s the harm in that? Everyone’s happy, aren’t they? Well, no, they’re not. Doctors are performing an operation that works no better than a placebo, so there’s no upside. There is, however, a significant downside in the complications from the operation.

Or take another example from a cruise I went on. Cruises offer Wi-Fi on the ship with tiny data limits (50MB for the whole trip). That’s so little that just turning on my phone would blow through the limit. So a cruise director offered, “Give me your phone and I’ll make it work on the boat.” I gave him the phone and he started turning off the data-hogging applications. A few months later I realized that one of the things he had turned off was my iCloud backup. So the decision the cruise director made, without telling me, was to give me very limited internet functionality on the boat while turning off my critical backup capability.

Another way of looking at iatrogenics is as overvaluing short-term gains against long-term risks. Take the example of thalidomide, the poster child for drug overuse. Thalidomide was a sedative prescribed around 1960. While it helped women with morning sickness (a relatively minor problem), it caused tens of thousands of serious birth defects.

Indulge me with one more example. After George Washington left the presidency, he took ill. His treatment was the standard of the day: bleeding. However, taking 5 to 7 pounds of blood from Washington’s body is now widely believed to have accelerated his death. Bleeding stayed around for a while after that; it was still recommended by leading doctors as late as 1909.

Taleb tells one story of how this problem goes beyond medicine and into finance:

One day in 2003, Alex Berenson, a New York Times journalist, came into my office with the secret risk reports of Fannie Mae, given to him by a defector. It was the kind of report getting into the guts of the methodology for risk calculation that only an insider can see—Fannie Mae made its own risk calculations and disclosed what it wanted to whomever it wanted, the public or someone else. But only a defector could show us the guts to see how the risk was calculated.

We looked at the report: simply, a move upward in an economic variable led to massive losses, a move downward (in the opposite direction), to small profits. Further moves upward led to even larger additional losses and further moves downward to even smaller profits.

At its core, this is what caused the financial crisis: people adding more and more risk for smaller and smaller gains. They failed to look at the downside risks, which kept growing larger and larger, because they couldn’t imagine that those risks would ever materialize.
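The asymmetry Taleb describes can be sketched as a toy payoff function. The numbers below are invented purely for illustration (they are not Fannie Mae’s actual model): each step up in the economic variable loses more than the last, while each step down gains less than the last.

```python
# Toy sketch of an asymmetric risk exposure (hypothetical numbers):
# upward moves produce growing losses, downward moves produce
# shrinking profits -- the shape described in Taleb's story.

def payoff(move: int) -> float:
    """Profit/loss for a move of `move` steps in the economic variable."""
    if move >= 0:
        return -(move ** 2) * 10.0   # losses grow faster and faster
    return 5.0 / abs(move)           # profits shrink as moves get larger

for m in (-3, -2, -1, 1, 2, 3):
    print(f"move {m:+d}: payoff {payoff(m):+.1f}")
```

Running it shows small, shrinking profits on one side (+5.0, +2.5, +1.7) and large, accelerating losses on the other (-10.0, -40.0, -90.0): exactly the kind of exposure where everything looks fine until the one big move that doesn’t.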

Oddly enough, people don’t get in trouble for doing this. There’s a general sense that the people causing the problems were doing the best they could. The idea that “this is the best modern medicine (or modern finance) has, even if it doesn’t work” is well accepted. This is true even when the procedure was a success but the patient died, or the economy collapsed.

A lot of this happens because the people making the decisions don’t have skin in the game. They get the upside benefits without being exposed to the downside risk. Taleb mentions that when Roman engineers built a bridge, they were required to sleep under it. Then, if the bridge fell down, the engineers would share the fate, in this case death, of the people hurt by its collapse.

So what can you do about all this? Try to get your doctor to put a little skin in the game. The next time you have an important medical decision to make, don’t ask your doctor for her medical opinion; ask her what she would do if she were in your place. This shifts her mindset from that of a disinterested professional to someone with a personal stake in the outcome. You might get a very different answer.

Read this along with my story on back pain.