July 14, 2015 | Public Filings | Peter W. Huber

Testimony by Peter Huber Before the Senate Subcommittee on Space, Science and Competitiveness

The FDA’s mission, as set out in statutory language written over 50 years ago, is to see to it that drug companies generate, and doctors receive on the FDA-approved label that accompanies every drug, the information they need to prescribe the drug safely and effectively. For the most part, the agency continues to use drug-trial protocols established in the 1960s, well before pharmacology developed the diagnostic tools and precisely targeted drugs that make precision medicine possible.

The clearest evidence that the FDA has not kept up with the advances in the science and technology of precision medicine is that it is losing its grip on how drugs are prescribed. In steadily growing numbers, doctors who specialize in the treatment of complex diseases are taking the initiative, using state-of-the-art technologies and analytical tools to develop the science themselves, and relying on their own analyses and databases to guide the safe and effective prescription of drugs to their patients.

And that fact alone points to a serious problem. Doctors can’t take the lead in working out how to prescribe a drug to the right patients until the drug has been approved, which, under the existing statutory language, means that the drug first has to perform well in FDA-approved clinical trials. But to perform well in a clinical trial, a drug has to be prescribed to the right patients.

It has become clear in recent years that traditional symptom-based definitions of diseases that are used to frame most clinical trials ignore what matters most in modern pharmacology—the same symptoms can be caused by a cluster of different molecular processes, and a precisely targeted drug can only control one of them. A drug’s efficacy and safety can also depend on a wide range of other molecular factors that are hard to identify in advance. We still speak of “developing a drug,” but “developing the patients” would be more accurate. Both matter, of course—pharmacology isn’t a science of one hand clapping—but all the complex biochemical details lie on the patients’ side of the applause.

Oncologists have led the way in recognizing the limitations of the FDA’s drug-approval process. In 2007, the Cancer Biomarkers Collaborative (CBC), a coalition of cancer experts drawn from the American Association for Cancer Research, the FDA, and the National Cancer Institute, started investigating the “growing imperative to modernize the drug development process by incorporating new techniques that can predict the safety and effectiveness of new drugs faster, with more certainty, and at lower cost.” A summary of the conclusions published by the CBC in 2010 noted that “traditional population-based models of clinical trials used for drug approval are designed to guard against bias of selection, which may form the antithesis of personalized medicine, and accordingly, these trials expose large numbers of patients to drugs from which they may not benefit.”

Other medical disciplines are following oncology’s lead. Two years ago, for example, the National Institute of Mental Health (NIMH), the world’s largest funder of mental health research, announced that it was “re-orienting its research” away from the disease categories defined by psychiatrists in their Diagnostic and Statistical Manual of Mental Disorders. Henceforth, NIMH-funded researchers will be encouraged to search for molecular pathways that transcend the symptom-based categories. In the words of the NIMH’s director, “patients and families should welcome this change as a first step towards ‘precision medicine,’ the movement that has transformed cancer diagnosis and treatment.”

Other diseases are being analyzed in similar ways. The National Institutes of Health’s Accelerating Medicines Partnership recently announced a $230 million, five-year plan to collaborate with ten big drug companies and eight non-profit organizations focusing on specific diseases, to unravel the molecular pathways that lead to Alzheimer's, Type 2 diabetes, rheumatoid arthritis, and lupus—and to investigate new methods to track a disease’s progress that could provide early reads on how a drug is affecting it. The objective is to “ensure we expedite translation of scientific knowledge into next generation therapies.” A Pfizer representative emphasized that the Alzheimer’s project will focus on developing a better understanding of the molecular pathways and networks that propel the disease. It will also include searches for molecular factors that can be used to develop drugs that intervene much earlier, intercepting diseases before they become irreversible and untreatable.

The advent of tools to unravel the molecular pathways of diseases, and of drugs precisely designed to target them, has called into question the conventional symptom-based medical taxonomy of diseases, and thus, indirectly, the central role it still plays at the FDA. In 2011, a task force convened by the National Research Council (NRC) released Toward Precision Medicine, a report written at the request of the NIH to address the need for “a ‘New Taxonomy’ of human diseases based on molecular biology.” We do indeed need one, the report concludes, and to facilitate its development, the report recommends the creation of a broadly accessible “Knowledge Network” that will aggregate data spanning all molecular, clinical, and environmental factors that can affect our health. Working out the molecular etiology of complex diseases will require an analysis of “biological and other relevant clinical data derived from large and ethnically diverse populations” in a dynamic, learn-as-you-go collaboration among biochemists, clinical specialists, patients, and others.

The report also includes an illustration of how we currently rely on dumb luck to help drugs that target complex disorders stumble their way through the FDA’s testing protocols. In 2003 and 2004 the FDA granted accelerated approval to two drugs, Iressa and Tarceva, on the strength of their dramatic therapeutic effects in about one in ten non-small-cell lung cancer patients. Over the course of the next two years the drugs were prescribed to many patients whom they didn’t help, and several follow-up clinical trials seemed to indicate that the drugs didn’t work after all—probably, we now know, “because the actual responders represented too small a proportion of the patients.” Meanwhile, the report continues, the molecular disassembly of lung cancer had begun its explosive advance. In 2004, researchers identified the specific genetic mutation that activates the EGFR enzyme that these two drugs inhibit. “This led to the design of much more effective clinical trials as well as reduced treatment costs and increased treatment effectiveness.” Under current, blinded trial protocols, however, such launches often depend on luck and circular science. The original clinical trial happens to include just enough of the right patients to persuade the FDA to license the drug. The fortuitously and just barely successful completion of the first clinical trial then starts the process that may ultimately supply the information that, ideally, would have been used to select the patients to include in that first trial.

In early 2005 Iressa became the first cancer drug to be withdrawn from the U.S. market after the required follow-up trials failed to confirm its worth to the FDA’s satisfaction. After further trials failed to establish that Iressa extends average patient survival, and serious side effects surfaced in some patients, the manufacturer halted further testing in the United States.

We do, however, know that Iressa survival times and side effects vary widely among patients. And we have a pretty good idea why. As Bruce Johnson, a researcher at Boston’s Dana-Farber Cancer Institute and one of the doctors involved in the original Iressa trials, remarked in 2005, “For us as investigators, at this point, there are at least 20 different mutations in the EGF receptors in human lung cancers, and we don’t know if the same drug works as well for every mutation … which is why we want as many EGFR inhibitor drugs available as possible for testing.”

When the FDA rescinded Iressa’s license, it allowed U.S. patients already benefiting from its use to continue using it. One such patient, who started on Iressa in 2004, when he had been given two to three months to live, was still alive eight years later, walking his dogs several miles daily. Rare cases like his have no influence at the FDA but are of great interest to doctors and researchers. In 2013, the National Cancer Institute (NCI) announced its Exceptional Responders Initiative. Four major research institutions are analyzing tissue samples, collected during clinical trials of drugs that failed to win FDA approval, to identify biomarkers that distinguished the minority of patients who did respond well from the majority who did not. The analysis of roughly a decade of prior trials in the first year of the study identified about 100 exceptional responders. As of March 2015, more than 70 cases had been provisionally accepted for further analysis, with hundreds more anticipated. Accepted tumor tissue samples “will undergo whole-exome, RNA, and targeted deep sequencing to identify potential molecular features that may have accounted for the response.” When the molecules that distinguish the exceptional responders align with what the drug was designed to target, these findings could well lead to the resurrection of drugs that might have helped many patients over the last decade.

In one such trial, the drug failed to help over 90 percent of the bladder cancer patients to whom it was prescribed. But it did wipe out the cancer in one 73-year-old patient. A genetic analysis of her entire tumor revealed a rare mutation that made her cancer sensitive to the molecular pathway that the drug modulates. Similar mutations were found in about 8 percent of the patients, and the presence of the mutation correlated well with the cancer’s sensitivity to the drug.
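
The statistics behind such a finding are straightforward. Below is a minimal sketch, in Python, of the kind of responder analysis just described: a test of whether a candidate mutation is associated with response to the drug. The counts are hypothetical, loosely scaled to the trial above (a response rate under 10 percent, the mutation present in roughly 8 percent of patients); only the fisher_exact routine is standard.

```python
# A minimal sketch of an exceptional-responder analysis: do patients who
# carry a candidate mutation respond to the drug more often than patients
# who do not? All counts below are hypothetical.
from scipy.stats import fisher_exact

#                    responded  did not respond
mutation_present = [        7,               1]
mutation_absent  = [        2,              90]

odds_ratio, p_value = fisher_exact([mutation_present, mutation_absent])
print(f"odds ratio = {odds_ratio:.1f}, p = {p_value:.2g}")
# A very small p-value supports the hypothesis that the mutation marks the
# subgroup of patients the drug can actually help.
```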

Which brings us back to why doctors who specialize in treating complex diseases are increasingly confident that they should work out how to practice precision medicine independently, without relying on FDA-approved labels. In brief, it comes down to two things. Researchers have developed the tools needed to work out the details of how molecular processes that go wrong deep inside our bodies spawn and propel diseases. And drug designers have developed a remarkable array of tools to design precisely targeted drugs that can disable or control those pathways.

New devices now make it quite easy to collect large amounts of genetic and other medically relevant data from many people. Amazon and Google are reportedly in a race to build the largest medically focused genomic databases. According to the director of engineering for Google’s genomics effort, Google aims to provide the best “analytic tools [that] can fish out genetic gold—a drug target, say, or a DNA variant that strongly predicts disease risk—from a sea of data.” Academic and pharmaceutical research projects are currently the company’s biggest customers, but Google expects them to be overtaken by clinical applications in the next decade, with doctors using the services regularly “to understand how a patient's genetic profile affects his risk of various diseases or his likely response to medication.”

Medicine will also benefit from the fact that the statistical tools needed to unravel causal pathways from complex datasets are of great interest in other sectors of the economy as well. The “overarching goal” of the “Big Mechanism” program recently launched by the Defense Advanced Research Projects Agency (DARPA) is to develop methods to extract “causal models” from large, complex datasets and integrate new research findings “more or less immediately … into causal explanatory models of unprecedented completeness and consistency.” To test these new technologies, DARPA has chosen to focus initially on “cancer biology with an emphasis on signaling pathways.” It’s a good call, and excellent news for oncology. Viewed from a data analytics perspective, the variability, complexity, and adaptability of cancer cells and terrorists have much in common.

Drug companies rely on our ability to expose disease-causing molecular chain reactions to identify key targets that, if disabled or controlled by drugs, will cure the disease. The tools currently used to design precisely targeted drugs have been widely used in developing effective later-stage treatments and clearly have the potential to identify and take control of the factors that launch diseases at the outset. Many serious disorders develop slowly, however, and there is little doubt that successful interventions at a very early stage will often be the best, sometimes the only, and almost always the most cost-effective way to beat them. The development of effective cures will depend on tracing diseases back to their molecular origins and addressing root causes rather than attempting to treat the symptoms that surface much later.

The tracing is already well underway. We know that the genetic seeds of many disorders are planted at the time of conception and lie dormant inside our bodies for many years before they start morphing into lethal diseases. An array of tumor suppression and DNA repair genes, for example, protect most of us from cancer for most of our lives. Hereditary variations in those genes affect how well they perform, and some are strongly linked to the development of specific cancers—breast, skin, or colon cancer, for example—or, in some rare cases, a propensity to develop cancers throughout the body.

Now emerging are gene therapies that offer a broad range of radically new medical interventions. Researchers have recently mastered powerful and flexible methods for selectively adding, deleting, or replacing genes in a live cell’s genome. These tools can do in weeks what often required months or years of work using previous gene-editing tools. And a new family of “RNA interference” drugs has the potential to regulate gene expression and thus take direct control of genes involved in the earliest stages of disease development. Most gene therapies are still in the investigational stages of development. But their feasibility and great promise are no longer in doubt. And no other currently known process has the potential to provide complete cures for the many rare but often deadly disorders caused by hereditary genetic mutations.

The next step could well be vaccine-like treatments that provide protection before cancers and other disorders start to develop. Researchers are investigating a number of different vectors for reprogramming the genetic code of cells inside a patient’s mature tissues and organs. In early trials, for example, young adults blinded by a rare genetic flaw experienced significant visual improvements soon after a viral vector was used to insert a healthy version of the flawed gene directly into their retinal cells. Similar procedures are reportedly being developed to treat cystic fibrosis, brain cancer, and muscular dystrophy.

Genetic therapies administered early enough to replace pathological variations in gene-repair and tumor-suppression genes could offer many people a significant, lifelong reduction in their risk of succumbing to what is currently the second most common cause of death in the United States. Rare variations in a single gene make some people prone to develop very high levels of cholesterol and suffer heart attacks in their teens. A more common variation in the gene has the opposite effect, and researchers are investigating the possibility of reprogramming cells to replace the high-cholesterol versions of the gene with the low-cholesterol versions. The HIV retrovirus pries its way into our immune-system cells by latching onto one of two proteins on the cells’ surfaces. A recent trial demonstrated the therapeutic potential of genetically engineering a patient’s own immune-system stem cells to replace or disable the gene that codes for the HIV-entry protein. In the words of one of the doctors involved in the trial, “This study shows that we can safely and effectively engineer an HIV patient’s own T cells to mimic a naturally occurring resistance to the virus, infuse those engineered cells, have them persist in the body, and potentially keep viral loads at bay without the use of drugs.”

While NIH researchers, doctors, and drug companies have demonstrated their confidence in relying on the analysis of the disease-causing molecular pathways when designing drugs and prescribing them to patients, the FDA has made clear that it will almost never approve a new drug on the basis of a clinical demonstration that the drug can shut down or repair a pathway. The FDA asserts—correctly—that a drug’s demonstrated effect on a single, disease-specific molecular pathway often fails to predict its ultimate clinical effect on patient health.

But much of the time we already know why, or can find out if we wish to. However precisely targeted it may be, a drug’s overall impact almost always also depends on the rest of the patient’s body: how the drug is metabolized by the liver, tolerated by the immune system, or interacts with other tissues to cause side effects can all affect its overall performance. Cancer cells and HIV virions mutate rapidly, so the disease itself keeps changing, and effective treatment will then require more than one drug to track the changes. Factors like these, however, are at least equally likely to undermine predictions made by the FDA-approved label when its contents are based on what was learned in a conventional clinical trial. The only way to work out how most such factors affect a drug’s performance is to prescribe it to a wide variety of patients and analyze how differences in patient chemistry affect its safety and efficacy. In a tacit admission of the limits of its own trial protocols, the FDA itself helped launch a nonprofit consortium of drug companies, government agencies, and academic institutions to compile a global database of “rare adverse events” caused by drugs and link them to the genetic factors in the patients involved.

The need to involve doctors and patients in the process of developing precision prescription protocols was also recognized in a 2012 report on “Propelling Innovation in Drug Discovery, Development, and Evaluation” written by the President’s Council of Advisors on Science and Technology (PCAST). “Most trials … imperfectly represent and capture … the full diversity of patients with a disease or the full diversity of treatment results. Integrating clinical trial research into clinical care through innovative trial designs may provide important information about how specific drugs work in specific patients.”

The British government appears to have reached a similar conclusion. It recently announced plans to integrate clinical treatment into drug-development efforts on a national scale. As described by life-sciences minister George Freeman, “our hospitals will become more important in the research ecosystem. From being the adopters, purchasers, and users of late-stage drugs, our hospitals we see as being a fundamental part of the development process.” Britain’s National Health Service will become “a partner in innovative testing, proving and adopting new drugs and devices in research studies with real patients.” While the details have not yet been made clear, the Times of London reports that “Ministers want to bypass traditional clinical trials by using patients as a ‘test bed’ for promising new drugs, linking [national] health service data to pharmaceutical company records to discover much more quickly how effective treatments are. Firms would be paid different prices depending on how well drugs work for individual patients … Ministers argue that the system of assessing new treatments is no longer up to the job and that the National Institute for Health Care Excellence needs to catch up.”

U.S. oncologists are already engaged in “rapid learning health care,” a term coined in 2007 by a group of health care experts convened by the Institute of Medicine. In brief, the workshop participants proposed a process for continuously improving drug science using data collected by doctors in the course of treating their patients, with a particular focus on groups of patients not usually included in drug-approval clinical trials. By 2008, as discussed in a recently published paper authored by two experts in the field, several major cancer centers had established networks for pooling and analyzing data collected by doctors in their regions. These systems are being used to identify new biomarkers, analyze multidrug therapies, conduct comparative effectiveness studies, recruit patients for clinical trials, and guide treatments. Several commercial vendors now offer precision oncology services. As discussed in the same paper, the powerful analytical tools and protocols now available, or under development, can use data networks to recommend treatments that would “avoid unnecessary replication of either positive or negative experiments … [and] maximize the amount of information obtained from every encounter” and thus allow every treatment to become “a probe that simultaneously treats the patient and provides an opportunity to validate and refine the models on which the treatment decisions are based.” Analytical engines like these take statistical analysis far beyond the one-dimensional correlations traditionally relied on by the FDA in the drug-approval process, and thus lead to far more precise prescription of the drug in question.
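
To make the contrast with one-dimensional correlations concrete, here is a minimal sketch, assuming a simplified registry with made-up fields, of the kind of multifactor model such networks can fit: response is related to several patient-level factors at once, rather than to treatment alone. The field names and simulated data are illustrative, not drawn from any actual system.

```python
# A minimal sketch of pooled-registry analysis: relate treatment response
# to several patient-level factors simultaneously. All fields and data
# below are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500  # simulated patient records pooled across treatment centers

# Hypothetical per-patient factors a rapid-learning network might record.
egfr_mutation = rng.integers(0, 2, n)     # is the targeted pathway active?
liver_impairment = rng.integers(0, 2, n)  # slows drug metabolism?
age = rng.normal(60, 10, n)

# Simulated ground truth: the drug mainly helps mutation carriers, and
# impaired metabolism blunts the benefit.
logit = -2.0 + 2.5 * egfr_mutation - 1.0 * liver_impairment
responded = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([egfr_mutation, liver_impairment, (age - 60) / 10])
model = LogisticRegression().fit(X, responded)
for name, coef in zip(["egfr_mutation", "liver_impairment", "age_decades"],
                      model.coef_[0]):
    print(f"{name}: {coef:+.2f}")
# A strongly positive mutation coefficient tells prescribers which patients
# the drug is likely to help, even when the average response rate is low.
```
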
The FDA does have in place a regulatory framework—“treatment IND”—that could be used to integrate clinical trial research with clinical care. It was originally developed to provide unapproved drugs to AIDS patients in the early years of medicine’s struggle with HIV. The original plan was that treatment INDs would be used for more comprehensive investigation. In the late 1980s, the National Institute of Allergy and Infectious Diseases (NIAID) began funding “community-based AIDS research”—studies of not-yet-licensed drugs in doctors’ offices, clinics, community hospitals, drug addiction treatment centers, and other primary care settings. The treatment-IND framework remains available to provide investigational drugs to patients for the treatment of serious and life-threatening illnesses for which there are no satisfactory alternative treatments. This is done, however, only when the drug is already under investigation or standard trials have been completed, and the FDA has concluded that enough data has been collected to show that the drug “may be effective” and does not present “unreasonable risks.” The drugs are provided for treatment, but doctors also collect safety and side-effect data.

More recently, the FDA established a “Group C” treatment IND by agreement with the National Cancer Institute (NCI). The program allows the NCI to distribute investigational drugs to oncologists for the treatment of cancer under protocols separate from those of the FDA-approved trials already underway. Treatment is the primary objective, though here again safety and efficacy data are collected. The FDA usually authorizes Group C treatments only when the drugs have reached Phase 3 of standard clinical trials and have “shown evidence of relative and reproducible efficacy in a specific tumor type.”

A third FDA-approved initiative has also tiptoed toward integrating clinical trial research into clinical care. The I-SPY 2 trial, sponsored by the Biomarkers Consortium, a partnership led by the Foundation for the National Institutes of Health (FNIH) that includes representatives of the FDA and the NIH, is investigating up to twelve different breast cancer drugs simultaneously. Patients are initially treated with the drug that targets the pathway that is propelling their cancer, but the trial uses adaptive protocols that allow the doctors involved in the research to use data obtained from patients early in the trial to guide which treatments should be used for patients who enter the trial later. The data are fed into an analytical engine as soon as they are collected, and immediately verified and shared with participants. Drugs may be abandoned if they perform badly, and other new drugs may be added. And the sponsors say that this is just a beginning that “holds tremendous promise for many cancers and diseases in addition to breast cancer” and may also lead to adaptive treatments within patients as new, successful drug-patient molecular pairs are identified.
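
The adaptive logic is easy to illustrate. Below is a minimal sketch of Bayesian adaptive randomization in the spirit of I-SPY 2, though far simpler than the trial’s actual algorithm: each arm’s response rate is tracked with a Beta posterior, and each new patient is assigned by Thompson sampling. The arm names and response rates are hypothetical.

```python
# A minimal sketch of adaptive randomization: track each arm's response
# rate with a Beta posterior and steer new patients toward arms that are
# performing well. Arms and true rates below are hypothetical.
import random

arms = {"drug_A": [1, 1], "drug_B": [1, 1], "drug_C": [1, 1]}  # Beta(a, b)
true_rate = {"drug_A": 0.15, "drug_B": 0.40, "drug_C": 0.25}   # unknown in real life

for patient in range(300):
    # Thompson sampling: draw a plausible response rate for each arm and
    # assign the patient to the arm with the highest draw.
    draws = {arm: random.betavariate(a, b) for arm, (a, b) in arms.items()}
    chosen = max(draws, key=draws.get)
    responded = random.random() < true_rate[chosen]
    arms[chosen][0 if responded else 1] += 1  # update the posterior

for arm, (a, b) in arms.items():
    n = a + b - 2
    print(f"{arm}: {n} patients, observed response rate {(a - 1) / max(n, 1):.2f}")
# Poorly performing arms see fewer and fewer patients and can be dropped;
# promising arms accumulate evidence faster, as the sponsors describe.
```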

These are steps in the right direction that, as the FDA asserts, will accelerate the drug-approval process, reduce its cost, and, by improving prescription protocols during the trials, substantially increase the likelihood that more drugs will end up being approved. But all three initiatives continue to require trials that run long enough to demonstrate clinical efficacy, even though it is becoming clear that we have the tools to work out a disease’s molecular pathways correctly. Doctors confirm this every time they match a drug’s mechanism of action to a pathway known to be active in a patient and successfully prescribe the drug off-label. By refusing to accept evidence that a drug can disrupt a pathway as sufficient evidence that the drug will have the desired clinical effects, the FDA is, in effect, requiring a demonstration that the pathway does indeed cause the disease. But that can be established independently, and often is, before the drug is designed. New drugs could be approved even faster and at still lower cost if the FDA would accept that body of research, together with a demonstration of pathway disruption, as sufficient proof of efficacy.

Experience has also established that data collected by unblinded doctors during the course of treating their patients can be used to create databases that successfully guide drug prescriptions going forward. The approval of a targeted drug to treat a specific disease is, in effect, approval of the drug’s ability to target a pathway that propels the disease, and thus approval of the science that linked the two and led to the development of the drug.

The FDA could and should go further, at least when dealing with new drugs that target serious, life-threatening diseases that are currently untreatable.

Following threshold screening for toxicity and an early demonstration, in what could be a small clinical trial, that the drug can indeed disrupt the pathway it was designed to target, the drug will, at the sponsor’s request, be made available to selected centers that specialize in treating the disorder in question. The treatment protocols adopted by its doctors will be monitored by independent outsiders, at the FDA itself or designated by the agency, or perhaps by one of the NIH institutes that sponsors research addressing diseases of that type. The doctors involved in the integration of clinical trial research and clinical treatment will work unblinded and without placebos, and will be given broad discretion to adjust treatments, collect data, and analyze responses as they go. The molecular pathway that propels the disease is important, but there are usually other pathways that interact with the drug to cause side effects or otherwise affect clinical outcomes, and many of them can’t be identified without prescribing the drug to patients and analyzing why patient responses differ. And if one accepts—as many doctors do—that the biological science has reached the point where it can be trusted to predict clinical benefits on the basis of a drug’s pathway-disrupting effects, doctors will have to start considering whether it is even ethical to conduct blinded placebo-controlled trials of a new drug that has already demonstrated those effects to the doctors’ satisfaction. Studies have also established that patients are much more willing to participate in trials if they are assured of being treated with a drug, not a placebo. And this approach will also address the increasingly vocal “right to try” demands from patients who suffer from serious diseases and desperately want immediate access to any drug that might help.

As is standard procedure in conventional trials, the doctors will monitor for side effects, and the overseeing authority will have the power to halt use of the drug in response. The doctors will also use any available tools that can track the drug’s effects on the progress of the disease, among them intermediate endpoints based on what is known about the normal rate of progression of the disease when left untreated. All data from all treatment centers will be pooled, and all doctors will have access to the data and to continuously updated analyses of the data, using them to guide prescriptions going forward. If there is no good way to assess the drug’s efficacy other than to continue the trial as long as a conventional trial would run, and wait for clinical effects to surface or not, that is what will be done. If doctors are, instead, able to demonstrate that improving prescription protocols are steadily reducing the likelihood that the disease will progress, the doctors themselves will take charge of notifying the FDA when, in their view, more patients should be accepted for investigative treatment with the drug by more doctors at more treatment centers. If the rate of positive outcomes continues to rise, at some point the FDA, again advised by the doctors who have been treating the patients, could approve the drug for general distribution. But as medical records go digital, the more likely and better approach in the longer term will be to continue to track and analyze how patients respond to the drug indefinitely into the future, and to continue refining prescription protocols for as long as the drug remains on the market. New side effects often surface as much as a decade after a drug is approved, and human bodies get reconfigured every time a new child is conceived.
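
A minimal sketch of the kind of continuously updated analysis this implies, assuming nothing more than a binary responded/did-not-respond outcome: a cumulative response rate with a 95 percent Wilson confidence interval, recomputed as each pooled outcome arrives. The outcome stream below is invented for illustration.

```python
# A minimal sketch of running outcome monitoring over pooled treatment
# data: cumulative response rate with a Wilson score interval, updated as
# each patient's (hypothetical) outcome arrives.
import math

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score interval for a binomial proportion."""
    p = successes / n
    center = (p + z**2 / (2 * n)) / (1 + z**2 / n)
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / (1 + z**2 / n)
    return center - half, center + half

outcomes = [0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1]  # 1 = responded
successes = 0
for n, outcome in enumerate(outcomes, start=1):
    successes += outcome
    lo, hi = wilson_interval(successes, n)
    print(f"after {n:2d} patients: response rate {successes / n:.2f} "
          f"(95% CI {lo:.2f}-{hi:.2f})")
# When the interval's lower bound clears a meaningful benchmark (say, the
# historical response rate under standard care), the treating doctors have
# a principled basis for asking the FDA to expand access.
```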

It is worth noting, finally, that there are times when relying entirely on a drug’s molecular effects to demonstrate efficacy is indispensable: insisting on the use of clinical endpoints in conventional trials will only ensure that no treatment gets developed and approved. Requiring clinical endpoints means conducting trials that can’t be completed any faster than diseases typically progress to the point where they cause clinical symptoms—and will take even longer than that when preventive drugs are designed to intervene before the diseases start to develop. The trials are very expensive, and the clock on drug patents keeps ticking while trials are conducted. A 2006 article in the New England Journal of Medicine attributed the complete absence of drugs that would prevent, rather than just alleviate, the late-stage symptoms of diseases such as Alzheimer’s or osteoarthritis to a drug-approval process that “makes it hard, if not impossible” to move a drug through Washington before its patent expires. “[D]espite considerable advances in our understanding of such diseases, there is no validated and tested path to successful FDA approval of a drug to prevent these conditions. This lack of a clear plan for drug approval adds high regulatory risk to the already high scientific risk of failure.”

Conventional clinical endpoints also present a more fundamental, if rarely noted, problem. Chronic diseases can cause irreversible effects, but when no treatment is available, there is little incentive to diagnose the disease early, so it usually is not diagnosed until clinical effects surface. At that point, a drug may be able to deliver so little clinical improvement to most patients that it is viewed as a failure.

Very rare diseases present another problem: there are often too few patients to conduct a statistically robust double-blind trial, and focusing on molecular-scale effects is the only alternative. Moreover, rare hereditary diseases are often strongly and unequivocally linked to specific genetic mutations and the flawed proteins that they code for, and a drug’s ability to block the protein’s pathological effects, or a genetic therapy’s ability to replace the mutant gene with a normal one, should be accepted as a concomitantly strong demonstration of the therapy’s efficacy. This will be particularly important when dealing with genetic therapies. Because they are genetic, the disorders can start developing very early in life, and to be effective the genetic therapies will have to start equally early. But these disorders are usually slow to develop—if they were very quick killers, the faulty genes probably wouldn’t have lasted long in the human gene pool. So to meet standard FDA requirements of demonstrated clinical benefits, groups of patients who receive these treatments might have to be monitored for many decades. Few drug companies will be eager to invest in these treatments if that is how long they are likely to have to wait for a return.
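
A standard sample-size calculation shows why very rare diseases defeat conventional trials. The sketch below uses the usual normal-approximation formula for comparing two response rates; the effect sizes are hypothetical, chosen only to show the scale of the problem.

```python
# A rough sketch of trial sample-size arithmetic: the normal-approximation
# formula for the number of patients per arm needed to compare response
# rates in two trial arms. The rates below are hypothetical.
import math

def patients_per_arm(p_control, p_treated, z_alpha=1.96, z_beta=0.84):
    """Patients needed per arm: two-sided 5% significance, 80% power."""
    p_bar = (p_control + p_treated) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p_control * (1 - p_control)
                                      + p_treated * (1 - p_treated))) ** 2
    return math.ceil(numerator / (p_treated - p_control) ** 2)

print(patients_per_arm(0.20, 0.35))  # ~140 per arm to detect 20% -> 35%
print(patients_per_arm(0.20, 0.30))  # ~290 per arm to detect 20% -> 30%
# Hundreds of patients per arm is routine for common diseases but
# unattainable for a disorder with only a few dozen known cases worldwide.
```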

As Dr. Janet Woodcock, currently the head of the FDA’s Center for Drug Evaluation and Research, noted over a decade ago, molecular biomarkers “are the foundation of evidence based medicine—who should be treated, how and with what. … Outcomes happen to people, not populations.” Precision medicine is inherently personal. The treating doctor and the patient are the only ones who have direct access to the information required to prescribe drugs with molecular precision. We will greatly accelerate, improve, and lower the cost of the drug-approval process by relying much more heavily on doctors who specialize in the treatment of complex diseases.
