Will Your New Doctor Be A Robot?

By Chuck Dinerstein, MD, MBA — Oct 15, 2018
Automation has decimated mid-level jobs, and much of the current talk is about machine learning replacing professionals such as doctors and attorneys. An opinion piece brings some useful perspective on how machines learn, as well as on their economic impact.

Among this year's many recurring topics has been the impact of machine learning on our lives, especially the implications for our future working lives. Prophecies range from a utopia of ubiquitous machine servants to a dystopia that hollows out the work and economic standing of the middle and lower classes. "What can machine learning do? Workforce implications," by Erik Brynjolfsson and Tom Mitchell in Science, provides some perspective on machine learning and its future economic impact.

Machine Learning

Machine learning, a form of artificial intelligence, describes "a general-purpose technology," like electricity, rather than a specific application. The current state of machine learning does not apply to the full range of human activity; for the foreseeable future, machine learning is limited to well-defined tasks. The authors identify specific tasks as suitable for machine learning: tasks with fully described goals and performance measures that can be taught, which in turn allows the development and application of algorithms. The resulting applications may appear miraculous, but they have limited scope and are not particularly resilient; small changes in goals or metrics severely degrade their functionality.

Humans learn from reading and by doing. Books capture general knowledge and understanding; this associative learning is problematic for machines. Learning by doing, from examples, is another matter. On-the-job training, the old-school term for being shown how to do a particular job and receiving feedback about whether it is being done correctly, is now "supervised learning." The basic knowledge of your job is what the computing world calls "ground truth." Machine learning mimics and accelerates supervised learning using ground-truth datasets organized into thousands of input and output pairs. The input is the job to be performed; the output is the known correct result. With faster computational speeds, less fallible memory, and unwavering focus, machines learn the ground truth quickly.
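As a sketch of the idea, supervised learning can be reduced to a few lines: the ground truth is a list of input and output pairs, and the machine "learns" by finding the stored example closest to a new input (a nearest-neighbor rule; the data below are invented purely for illustration):

```python
# Minimal sketch of supervised learning from ground truth: a list of
# (input, output) pairs. The values here are hypothetical; real systems
# train on thousands of such pairs, not four.
def nearest_neighbor(ground_truth, x):
    """Predict by copying the known output of the closest training input."""
    return min(ground_truth, key=lambda pair: abs(pair[0] - x))[1]

# Each pair: (measurement, the known correct label supplied by a human trainer)
ground_truth = [(1.0, "normal"), (1.2, "normal"),
                (8.5, "abnormal"), (9.0, "abnormal")]

print(nearest_neighbor(ground_truth, 1.1))  # near the "normal" examples
print(nearest_neighbor(ground_truth, 8.8))  # near the "abnormal" examples
```

The same sketch hints at the limitation discussed in the next section: the learner can only echo the labels in its ground truth, so an input unlike anything in the dataset still receives one of those two labels.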

Limitations

There are significant limits on this learning. Machine learning is perhaps our best example of teaching to the test. Machine learning applies only to the ground truth it has been shown; machines cannot generalize their knowledge as humans do. Consider the applications of machine learning to medicine. Most applications are confined to images and simple diagnostics where the problem and solution are well defined, the information is readily digitized for machines, and thousands of examples are available without patient involvement. Machine learning has been demonstrated to accurately identify retinal changes in diabetes, the mammographic appearance of breast cancer, and electrocardiogram patterns consistent with abnormal heart rhythms. These problems exhibit a yes-or-no, right-or-wrong clarity. Within these parameters, machine learning will equal or exceed the abilities of its human trainers.

The algorithm trained to detect breast cancer cannot identify changes in the eye from diabetes. When the scope and its ground truth change, machine learning is lost; it cannot generalize to different situations. Machine learning reflects "the truth" of its training dataset and is our faithful mimic. Machine learning depends upon the variables we provide, the unintended bias those choices may reflect, and the fact that the problem under consideration changes. Consider three examples drawn from medicine:

  • Clinical evidence indicated that excess acid was the underlying cause of stomach ulcers. It required several decades to discover a new variable, H. pylori, a bacterium ultimately found to be the truer cause of stomach ulcers.
  • Clinical studies of heart disease involved primarily men for several decades. We mistakenly assumed that men and women respond identically. Only in the last few years have we recognized this unintended bias and found that women often report different symptoms than men when experiencing heart disease.
  • Lung cancer in the early 20th century was so rare that physicians gathered to see examples. Within 50 years, it was the most common cause of cancer deaths.

Machines might have applied the existing knowledge more consistently, but they would not have found the association with H. pylori, recognized that women were under-represented, or foreseen that lung cancer would become our most significant cause of cancer death. Humans made those findings.

Perhaps the greatest weakness of these systems, their Achilles' heel, is their lack of explanatory capability. They adjust thousands of numerical weightings to arrive at probabilistic answers, given to us as percentages of certainty. Machine learning's chain of thought, all those intermediary steps, is unknown or unclear to the humans who must act upon its recommendations. We know neither the discriminators nor the weightings being used, only the output expressed statistically; we do not understand how these systems derive their conclusions. Certain decisions require an explanation; a high-tech Magic 8 Ball is insufficient when treating cancer.
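To make the opacity concrete, here is a toy sketch of how such a system answers. Every number in it is invented for illustration; real systems adjust thousands of weightings, not four. A handful of learned numbers are combined and squashed into a percentage of certainty, and those numbers are all there is to inspect:

```python
import math

# Hypothetical learned weightings -- purely illustrative values.
weights = [0.83, -1.42, 0.07, 2.15]
inputs = [1.0, 0.5, 3.2, 0.9]   # features describing one (invented) case

# Combine the features, then squash the score into a probability.
score = sum(w * x for w, x in zip(weights, inputs))
certainty = 1 / (1 + math.exp(-score))  # logistic function: 0..1

print(f"Recommendation given with {certainty:.0%} certainty")
```

The output is an answer expressed statistically, but nothing in the list of weights explains why the system reached it; that is the Magic 8 Ball problem in miniature.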

Workforce Implications

Think of our work as a series of tasks. Some of our work is explicit: easily described and amenable to machine learning. Another portion of our work consists of activity in which "we know more than we can tell": some aspects of our work are implicit, shareable in the sense of "see one, do one," but not amenable to description. Our implicit knowledge is the "gut feeling," the intuition and nuance. Machine learning cannot replace that experience because we cannot express it in a training dataset.

Our work is a variable combination of explicit tasks that will readily succumb to the advances in machine learning and implicit tasks that only humans can perform. Machine learning will disrupt work based upon the balance of these tasks, making its impact far from simple. As explicit labor’s value diminishes, implicit labor’s value will rise.

Machines, being tireless and cheap, proved good substitutes for highly structured, repetitive work, the jobs in the middle-skill range. Automation replaced those jobs, and employment in those sectors fell. Implicit tasks are found at both ends of the work spectrum, from home health aides to physicians. As analytics are increasingly managed by machines, these workers' implicit work becomes more valuable. Once we see this shift in value from explicit knowledge to implicit knowledge, the old economic rules apply. Labor demand will fall for work that machines can readily substitute, and labor will move to fill complementary work, to assist or be assisted by machines. When many individuals can readily acquire implicit skills, wages will decrease; there is a more than adequate supply of workers. For implicit skills and knowledge that are more difficult to acquire, labor will be less abundant, and wages may rise.

Another economic factor, captured in the term complementarity, the value derived from relationships, will also moderate the impact of machine learning. The value of a physician is not solely related to their knowledge but also to their hospital affiliations and to the physicians and services they utilize. Physicians provide greater added value based upon their relationships, their complementarity. In the age of machines, complementarity, which cannot be easily learned by machines, will increase labor's value.


Dr. Charles Dinerstein, M.D., MBA, FACS is Director of Medicine at the American Council on Science and Health. He has over 25 years of experience as a vascular surgeon.
