“Eroom’s Law: the cost of developing a new drug doubles every nine years”

[Figure: drug approval trends (Arnst)]

Eroom’s Law is Moore’s Law spelled backwards. Sadly, it describes the reality of declining drug approval rates. “Diagnosing the decline in pharmaceutical R&D efficiency” was published in Nature Reviews Drug Discovery in March 2012. The abstract:

The past 60 years have seen huge advances in many of the scientific, technological and managerial factors that should tend to raise the efficiency of commercial drug research and development (R&D). Yet the number of new drugs approved per billion US dollars spent on R&D has halved roughly every 9 years since 1950, falling around 80-fold in inflation-adjusted terms. There have been many proposed solutions to the problem of declining R&D efficiency. However, their apparent lack of impact so far and the contrast between improving inputs and declining output in terms of the number of new drugs make it sensible to ask whether the underlying problems have been correctly diagnosed. Here, we discuss four factors that we consider to be primary causes, which we call the ‘better than the Beatles’ problem; the ‘cautious regulator’ problem; the ‘throw money at it’ tendency; and the ‘basic research–brute force’ bias. Our aim is to provoke a more systematic analysis of the causes of the decline in R&D efficiency.
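
To make the halving claim concrete: a 9-year half-life compounded over the roughly 60 years the authors cover implies about a 100-fold decline, in the same ballpark as their “around 80-fold” figure. A quick sanity check of the arithmetic (only the 9-year half-life and the approximate 1950–2010 span come from the abstract; the rest is plain math):

```python
# Eroom's Law sanity check: output per inflation-adjusted $1B of R&D
# halves roughly every 9 years (Scannell et al., 2012).
half_life_years = 9.0
span_years = 2010 - 1950  # approximate period covered by the paper

decline_factor = 2 ** (span_years / half_life_years)
print(f"Implied decline over {span_years} years: ~{decline_factor:.0f}-fold")
# Prints ~102-fold, consistent with the abstract's "around 80-fold"
# given the "roughly every 9 years" hedge.
```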

For some commentary on the Scannell et al. paper, “Why Drug Development is Failing – and How to Fix It” is useful. BTW, Derek Lowe is my favorite observer of pharma research – I highly recommend In the Pipeline.

The new drug drought was highlighted in January by Derek Lowe, a pharmaceutical scientist who writes the influential blog In the Pipeline. He asked his readers to name the most worthwhile new drug that had been introduced since 1990. Of the many candidates nominated, the vast majority were brought to market in the first half of that 20-year span.

One reason for the industry’s meager R&D productivity is the sheer complexity of the human body, argue four analysts at Sanford C. Bernstein, led by Jack W. Scannell. In their article in Nature Reviews Drug Discovery, “Diagnosing the Decline in Pharmaceutical R&D Efficiency,” they examined R&D projects for more than 28,000 compounds investigated since 1990. During that 20-year period the pharma industry increasingly concentrated its R&D investments on drugs that address unmet therapeutic needs and untargeted biological mechanisms—areas where the need is great but the risk of failure highest. This is the widely held “low-hanging fruit” theory of the drug drought: the easier disease targets, such as high cholesterol, asthmatic airway passages, migraines, and ulcerous digestive systems, have already been addressed. Complex diseases such as cancer and neurodegenerative conditions are much harder to solve.

But Scannell and his colleagues also laid out four additional, interlocking arguments that may explain the decline in R&D output:

  • The ‘better than the Beatles’ problem: Imagine how hard it would be to come up with a successful pop song if any new song had to be better than the Beatles. Unlike with cars or electronics, there is no interest in novelty for its own sake with drugs. And there’s no point in creating something that’s only just as good as what’s already available, especially since today’s hit drug is tomorrow’s inexpensive generic.
  • The ‘cautious regulator’ problem: The progressive lowering of risk tolerance, particularly after the pain treatment Vioxx was removed from the market in 2004 for safety reasons, raises the bar on safety for new drugs, which makes R&D both costlier and harder.
  • The ‘throw money at it’ tendency: The tendency to just keep pouring more money and resources into a research project or a widely-held theory until something sticks. Could also be called throwing good money after bad.
  • The ‘basic research–brute force’ bias: The industry’s tendency to overestimate the probability that advances in basic research and large-scale screening processes will yield a molecule that proves safe and effective in clinical trials.

As an outsider I find it easy to place a lot of the blame on the ‘cautious regulator’ problem. A similar disease afflicts the US nuclear power industry. A standout example of the impact on drug development is the near impossibility of gaining approval for new drug “cocktails”. The ‘personalized medicine’ concept exploits our ability to combine very fast sequencing of a patient’s DNA with exploding ‘big data’ containing detailed cases of patients-symptoms-drugs-outcomes. Sadly, it is nearly impossible to get such drug combinations approved.

Scannell and his fellow authors throw water on the personalized medicine theory by pointing out that despite the shift to targeted drugs and high-tech screening tools, the probability that a small-molecule drug will successfully complete clinical trials has remained almost constant for the past 50 years. And those treatments that do succeed can cost patients and insurers hundreds of thousands of dollars per year, because they will by definition only work on the small number of patients who have the cellular target. Physicians who prescribe drugs and the scientists who invent them are increasingly embracing a more nuanced view of drug discovery: the idea that most diseases require a combination of targeted drugs, often called a cocktail, to be held in check. The cocktail approach proved effective against AIDS, and medical experts believe the same approach may be necessary for cancer, Alzheimer’s, and a range of other diseases.

The problem with cocktails, however, is that it can be difficult if not impossible for two different companies to test experimental drugs in concert, for both competitive and safety reasons. Companies are beginning to overcome those competitive challenges, however, and collaborate on some of the most difficult challenges in medicine, most notably Alzheimer’s disease, the only one of the top 10 causes of death in the U.S. with no known cause, cure or even a way of slowing its progression. In 2004 the National Institutes of Health, the FDA and 20 drug companies joined forces to start the Alzheimer’s Disease Neuroimaging Initiative (ADNI), a landmark public-private partnership tasked with mapping all the biological markers connected to Alzheimer’s. The ADNI’s defining principle is to publicly share and relinquish ownership of all data and findings as soon as possible. More than 57 sites are collecting data from thousands of patients, and the results to date have already been incorporated into research and trials by pharmaceutical companies.

Antibiotic resistance: we need better incentives

(…) And going to the hospital has itself become alarmingly risky. Already, 1.7 million people in the U.S. acquire infections in the hospital each year, resulting in 99,000 deaths, according to the Centers for Disease Control and Prevention.

(…) “A lot can happen in the several days that it takes for the doctor and the patient to determine that the first antibiotic that was given didn’t work,” Mellon said.

We were traveling and thus missed Megan McArdle’s October 2011 analysis. I highly recommend a careful read to reflect on the scale of the problem and some possible policy solutions. A number of problems contribute to poor investment incentives, as well as poor incentives to maximize the utility of new molecules.

(…) The problem is, efforts at promoting conservation may discourage innovation—and vice versa. Some hospitals now require infectious-disease doctors to sign off on the use of newer and more powerful antibiotics. But this has a cost. “When a new antibiotic comes out,” Pfizer’s Utt says, “physicians don’t necessarily use it—they tend to hold it in reserve. So by the time it’s being used, it’s already used up part of its marketable patent life.” As a result, fewer large firms may want to spend the time and money to get these drugs approved—according to the IDSA, only two major drug companies (GlaxoSmithKline and AstraZeneca) still have strong active research programs, down from nearly 20 in 1990. Antibiotics are not big moneymakers: Every time a doctor writes a prescription for Lipitor, Pfizer may gain a customer for decades. But short-course drugs like antibiotics sell perhaps a dozen doses.

(…) Those same critics suggest that perhaps we should take this out of the invisible hands of the market. Historically, we’ve solved tragedy-of-the-commons problems either through privatization, as Britain did with its land, or through nationalization, as many nations have done with their military and police. If the market doesn’t work, why not try the government?

Even many libertarian types agree that the commons problem seems to call for stronger state controls over antibiotics. But how far should that go? Government and academia perform vital basic research, but they haven’t delivered a lot of working drugs. “What would be nice,” says Daemmrich, “would be to have free-market mechanisms reward new-drug discovery even as the use of antibiotics was limited to infections that don’t go away on their own.”

One possibility is to have the government buy all the antibiotics on a sliding scale: so many billion dollars for a first-in-class antibiotic, half that amount for a second-in-class, and so forth. The government could then restrict the antibiotic’s use. I’ve posed this possibility to people at pharmaceutical companies and gotten a surprisingly warm reception. Another idea, proposed by Outterson and a colleague, Harvard’s Aaron Kesselheim, is to change the reimbursement system so that companies get paid more when fewer of their drugs are prescribed, as part of a conservation plan. “Let’s say Bayer had a diagnostic test that could quickly tell whether you had a bacterial or viral infection. Right now, the only thing that this would do is knock down their unit sales [of antibiotics]. We should reward companies like Bayer if they bring out a diagnostic like this—their unit sales might decrease by half, but if so, we should quadruple their unit price.” Or we could have special rules for antibiotics patents: instead of a 20-year term, make them renewable annually for drug companies that promote conservation.

These ideas sound elegant and simple in a magazine article. In the real world, they’d be messy and controversial. The government would be getting into the business of fixing prices. Likely, it would overshoot, handing windfall profits to firms, or undershoot, leaving us without enough drugs to treat emerging resistant infections. But the potential for such mistakes shouldn’t stop us from trying to pursue creative public-private solutions. We just need to be prepared to face a lot of yelling.

Especially since the way to reward conservation is not entirely clear. Laxminarayan notes, “Whether resistance develops is not entirely a function of what the manufacturer does—it’s a function of what other manufacturers do as well.” Not to mention doctors, and patients, not all of whom are, ahem, entirely compliant.
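
A quick back-of-the-envelope on the Outterson/Kesselheim reimbursement idea quoted above: if a conservation-friendly diagnostic halves an antibiotic’s unit sales while the unit price quadruples, the manufacturer’s revenue doubles. A toy calculation (all figures are hypothetical, chosen only to mirror the quote):

```python
# Toy model of the Outterson/Kesselheim proposal: pay more per unit
# when fewer units are prescribed. All numbers are hypothetical.
baseline_units = 1_000_000   # doses sold per year before the diagnostic
baseline_price = 50.0        # dollars per dose

units = baseline_units * 0.5  # "their unit sales might decrease by half"
price = baseline_price * 4.0  # "we should quadruple their unit price"

before = baseline_units * baseline_price
after = units * price
print(f"Revenue before: ${before:,.0f}, after: ${after:,.0f} ({after / before:.0f}x)")
# Halving volume while quadrupling price doubles revenue, so the firm
# comes out ahead even as far fewer doses are prescribed.
```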

If you are not totally depressed, read the June 14, 2011 McArdle analysis “How Superbugs Will Affect Our Health Care Costs”. That article is based on “The ‘return of our old enemies in an untreatable form’” by the Remapping Debate. Please read both articles for discussion of the following two figures — these two trends can only end very badly:

Note that the first chart does not include the resurgence of multidrug resistant (MDR) and extensively drug resistant (XDR) tuberculosis.

New Drugs Cost Even More Than You Think

[Figure: R&D constant dollar graph]

The depressing figure above was referenced in McArdle’s “Pharma Spending Less on Finding New Drugs”; copyright The Boston Consulting Group. It plots NMEs per $B of R&D spent (constant dollars), where NMEs are new molecular entities.

There is a very helpful article in which Megan McArdle examines recent studies of the cost of discovering a new approvable drug. The US$1 billion figure I’ve been using is low by a factor of 4 to 12:

(…) The standard figure for drug discovery thrown around by the industry’s most avid critics is the Light and Warburton estimate of roughly $43 million. Most serious analysts think that’s way too low (I agree–their assumptions were bizarre, and their attempt to defend them in the comments to this Tim Noah piece is painful to read).

The industry, and its supporters, prefer Joseph DiMasi’s figure of around $800 million. But critics point out that it was derived using confidential data, which can’t be verified, and they are very critical of the method, which includes opportunity costs–the returns that pharmaceutical firms didn’t earn by spending the money elsewhere.

Now along comes a new method, from Matthew Herper at Forbes. It uses only public, audited data, and it’s breathtakingly simple: over a 15-year period, they divided each company’s R&D spend by the number of drugs they got approved. The result: DiMasi is also way too low. For every approved drug, pharma spent between $4 billion and $11 billion on R&D. Yes, there’s probably some wiggle room on the accounting, but not that much–your auditor is not going to let you reclassify your new delivery trucks, or a Human Resources SVP, as a research expense.

As Herper points out, this isn’t necessarily a vindication of pharma–one could demand to know why they have to spend so much money to develop new drugs. Yes, I know, it’s getting harder to find approvable new drugs, but the industry has been flailing for ten years, and so far, the only answer they have hit on seems to be “more layoffs!” Maybe they’re just trapped in a bad place, but since the layoffs clearly aren’t working, I sure hope they come up with something else.

Still, it’s a useful corrective to the notion that pharma just wanders down to the university labs once a year to harvest the new drugs, then spends the rest of the year sitting back and idly watching the royalty checks pour in through the mail slot. Finding an approvable new drug is a long, expensive process that too often goes awry – and often, the rules we impose, and even tax policy, make things worse. We should think about these numbers every time someone like Marcia Angell suggests that really, Big Pharma barely does anything. Unfortunately, Big Pharma is doing a lot, although not necessarily as effectively as they could. Even more unfortunately, a dry pipeline hurts us at least as much as it hurts them.
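
Herper’s method, quoted above, really is as simple as it sounds: total 15-year R&D spend divided by approvals. A sketch of the computation (the company names and figures below are made up for illustration, not Herper’s actual data):

```python
# Sketch of Matthew Herper's back-of-the-envelope method: divide each
# company's cumulative R&D spend over 15 years by the number of drugs
# it got approved in that window. All figures here are hypothetical.
companies = {
    # name: (15-year R&D spend in $B, drugs approved)
    "PharmaA": (60.0, 8),
    "PharmaB": (45.0, 11),
    "PharmaC": (80.0, 9),
}

for name, (rd_spend_billions, approvals) in companies.items():
    cost_per_drug = rd_spend_billions / approvals
    print(f"{name}: ~${cost_per_drug:.1f}B of R&D per approved drug")
```

With real, audited data this per-company division is what produced the $4 billion to $11 billion spread per approved drug that Herper reported.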

SRT1720: Good (And Confusing) News for Obese Mice

Derek Lowe’s insider analysis of a new paper in Nature Scientific Reports:

Readers of this blog will be fairly familiar with the long, interesting story of sirtuin activators. Today we will speak of SRT1720, of which we have spoken before. This molecule was described in 2007 as an activator of Sirt1 with beneficial effects in rodent models of diabetes. But both of those statements were called into question by a series of papers which found difficulties with both the in vitro and the in vivo results (summarized here). The GSK/Sirtris team fired back, but that paper also served as a white flag on the in vitro assay questions: there were indeed artifacts due to the fluorescent peptides used. (Another paper has since confirmed these problems and proposed an off-target mechanism).

But that GSK response didn’t address the in vivo assay questions at all – we still had a situation where one group said that these compounds (SRT1720 in particular) were beneficial, and another said that it showed no benefit and was toxic at higher doses. Adding to the controversy, another paper appeared late last year that went back to nematodes, and found that SRT1720 did not extend their lives, either. The state of this field can be fairly described, then, as “extremely confused”.

Now we have a new paper whose title gets right down to it: “SRT1720 improves survival and healthspan of obese mice”. First time I’ve seen “healthspan” as a word, I might add, and another interesting sidelight is that this appears in Nature Scientific Reports, the publishing group’s open-access experiment. But now to the data:

{snip all the meat}

[From SRT1720: Good (And Confusing) News for Obese Mice]

Health Care Reform and the Drug Industry: How Goes It?

Derek Lowe examines how the Obama administration’s health care reform has actually worked out for the drug industry:

(…) Even without any backtracking on exclusivity, the article maintains that health care reform was a loser for the drug industry. The author goes on to detail the various other costs of the bill as it was passed, and then gets to the biggest structural problem:

While the healthy part of the pharmaceutical market will be pounded, the government-run segment of the market, Medicaid, will be expanded by 16 million patients. Medicaid has the worst pricing structure and the worst track record in paying for innovations of any sector in the United States market. Like government health-care systems around the world, Medicaid must be dragged to pay for medical advances. Unlike employers and seniors in Part D, Medicaid patients cannot vote with their feet if their health plan does not provide the new medicines they want. The incentives in Medicaid all run against paying for pharmaceutical innovations.

So, Obamacare significantly expands the worst sectors of the pharmaceutical market while degrading the best.

Read the whole thing »