Scott Gottlieb's FDA Revamps Regulations on Medical Software

By Chuck Dinerstein, MD, MBA — Dec 11, 2017
As part of regulatory reform, and given the increasing use of computer algorithms in patient care, the Food and Drug Administration released draft guidelines for software that aids both doctors and their patients.

Computer-driven medical devices are now ubiquitous in capturing data, but the real power of these systems will come from the recommendations they make to support the decision-making of physicians and patients. Last week Scott Gottlieb and the Food and Drug Administration (FDA) released a statement on their proposed guidance for medical devices whose software provides clinical and patient decision support (CDS). Given that 80% of physicians use medical apps on their smartphones and that, as of 2015, the number of health-related apps was well past 165,000, this is an area the FDA must address.

The draft guidance, which will be modified after stakeholder comments, sorts software into three categories:

  • Software actively regulated as medical devices by the FDA
  • Low-risk decision support, such as body mass index (BMI) calculators, that the agency, exercising regulatory discretion, will choose to ignore (a minimal sketch of such a calculator follows this list)
  • Software outside the agency's interest and jurisdiction.
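
To make concrete how modest that second category is, here is a minimal sketch of a BMI calculator in Python. The arithmetic (weight in kilograms divided by height in meters squared) and the WHO category cut-offs are standard; the function names and example values are purely illustrative.

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / (height_m ** 2)


def bmi_category(value: float) -> str:
    """Map a BMI value to the conventional WHO categories."""
    if value < 18.5:
        return "underweight"
    if value < 25:
        return "normal weight"
    if value < 30:
        return "overweight"
    return "obese"


if __name__ == "__main__":
    b = bmi(82.0, 1.78)
    print(f"BMI {b:.1f} -> {bmi_category(b)}")  # e.g., BMI 25.9 -> overweight
```

Software at this level of simplicity carries little risk of harming a patient, which is why the agency is content to look the other way.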

Four criteria must be met to avoid FDA scrutiny as a medical device. The software must be intended to 1) display, analyze or print medical information and designed to 2) provide recommendations about prevention, diagnosis or treatment. The remaining two are the essential criteria and are worth a more in-depth look.

First, devices and software that “acquire, process or analyze” medical images or signals are automatically medical devices requiring FDA approval. So CT scanners, EKGs, and pulse oximeters (devices that non-invasively measure your level of oxygenation) all require approval. The Fitbits of the world that count steps and stairs do not. The app Cardiogram, being used by the University of California, San Francisco to study heart health and arrhythmias, sits in a grey area: it can analyze heart rhythms, but that data is currently sent to researchers and not shared with users. That could change, with a simple software upgrade, if its makers were to seek and get FDA approval.

Second, in making recommendations, the software must make its logic explicitly apparent to the physician users, allowing them “to independently review the basis for such recommendations” and not “rely primarily on any of such recommendations” for diagnosis or treatment. This provision has two immediate consequences. First, it maintains the role of the physician in overseeing and applying recommendations. There is no doctorless healthcare in our immediate medical software future. That is good news for physicians facing increasing loss of their medical authority, their ‘scope of privileges,’ to mid-level providers and ancillary healthcare staff. But clinical liability also resides with the physician users. This focus on physician-user responsibility is consistent with the history of automated systems, where developers and manufacturers have been exempt from accountability for the use of their products unless a malfunction was demonstrated. [1]

Another consequence flowing from this requirement for an explicit explanation of recommendations is the impact on start-ups developing these applications. Much of the software currently under development makes use of deep learning: computers are shown thousands of examples, told which are correct and which are incorrect, and then the program makes its own decisions. The scientific literature and mainstream media feature many of these systems: reading mammograms to diagnose cancer, looking at our retinas to detect changes from diabetes, or, in the case of IBM’s Watson [2], aiding in diagnosis and treatment.
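
To illustrate that train-on-labeled-examples pattern (and not any particular vendor's product), here is a toy sketch using a small neural network on synthetic data; every name and number in it is purely illustrative.

```python
# Toy illustration of supervised deep learning: show the model thousands of
# labeled examples, then let it classify new cases on its own. The data are
# synthetic; no real diagnostic system or clinical dataset is implied.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# 5,000 synthetic "cases", each labeled 1 (disease) or 0 (no disease)
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
model.fit(X_train, y_train)  # learn from the labeled examples

print("accuracy on unseen cases:", model.score(X_test, y_test))
# The fitted weights offer no human-readable rationale for any single
# prediction, which is exactly the "black box" problem discussed below.
```

The catch, as the next paragraph explains, is that nothing in the fitted model gives a physician a rationale to review independently.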

The use of machine learning has brought great success, beating humans at chess and Go, but even the systems’ designers cannot explain what information the computer used in making its choices. As British science fiction author and futurist Arthur C. Clarke stated, "Any sufficiently advanced technology is indistinguishable from magic." The FDA is not in the magic business. Systems that cannot explain the rationale for their recommendations will be considered medical devices subject to FDA review. It is difficult to hold physicians liable for acting or not acting upon recommendations of CDS whose rationale is not apparent. This maintains the subservient relationship of the tool to its human masters, a necessary precondition if liability is to fall to the master.

If the algorithm, with its unknown chain of reasoning, were an equal to the physician, then perhaps it would be considered equally culpable legally; ‘the algorithm made me do it’ would be a legal defense. The FDA’s guidelines go further: it is not just ‘black box’ algorithms that will come under scrutiny; recommendations must be based upon data readily understood by and publicly available to physicians – no proprietary algorithms or datasets.

Devices that make recommendations based upon guidelines, whether generated locally by your health system or nationally and globally by academic clinical societies, would not require FDA clearance. But this raises practical concerns not addressed by the FDA. Who judges which guidelines to use when guidelines conflict, which is often the case? The recent American Heart Association guidelines on hypertension make recommendations different from guidelines issued just a few years ago. Can an institution choose which to follow? Who makes these decisions, administration or medical staff? There are liability concerns here, too: as guidelines change over time, how are prior decisions archived? Who tracks and archives the guidelines and systems in place when a medical decision made five years earlier is subject to malpractice review?

The FDA also drafted similar criteria for software recommendations used primarily by patients. The requirements are the same, but, recognizing the greater knowledge and understanding of physicians, the FDA narrows the patient-facing recommendations that do not require review. Systems that remind patients to take their medicines or that recommend an over-the-counter medication for colds or allergies are not medical devices subject to scrutiny. But a device that supports changing the dose, timing or cessation of a prescribed medicine is a regulated medical device. This has implications because this type of software can personalize and improve care for patients with diabetes taking insulin or patients taking warfarin as a blood thinner for stroke prevention and other reasons.

[1] The legal history of aircraft ‘autopilots’ is a good example of how liability is viewed by the courts. There are cases where the pilot did not engage the autopilot and was found liable, and cases where the pilot did engage the system and was again found liable. The human ability to take action or not, our agency, is a significant factor before the court.

[2] Watson is IBM’s natural-language computer and interface that beat the human champions on Jeopardy!


Chuck Dinerstein, MD, MBA

Director of Medicine

Dr. Charles Dinerstein, MD, MBA, FACS, is Director of Medicine at the American Council on Science and Health. He has over 25 years of experience as a vascular surgeon.
