Artificial Intelligence, Pathology, and Driverless Cars: Who’s Behind the Wheel?

The topic of artificial intelligence (AI) is hard to get away from. Did you know that you will soon be able to get a ride regularly in a driverless car? My daughter just took a position as a marketing manager for a driverless car company that will soon launch the service in San Francisco. To prepare for her interview, she asked me what I thought of driverless cars. I shared that I liked the privacy – i.e., no potentially creepy driver to worry about. But on the flip side, I had not heard much about whether driverless cars were safe. What would the car do if another vehicle ran a red light and crossed its path? Was there some sort of validation and approval process by an authoritative external agency, analogous to medical devices and the FDA? Was there an option to abort the ride if I didn’t like what was happening?

The common themes from her conversations with family and friends were about safety and control. I think these are related in many ways. Feeling comfortable with who is in control makes one feel safe. Unfamiliarity with who or what is behind the wheel feels unsafe, and might make users less likely to quickly adopt a driverless car.

In health and medicine, there is lots of excitement about AI – but will there be similar worries among users about “driverless” healthcare? UC Davis’ first AI symposium last month had ~200 participants and was clearly a big success. Our own faculty participated in three platform presentations, seven posters, and a panel discussion, so we are clearly out there in this emerging field. But is our own enthusiasm enough to get patients and providers to adopt the AI tools we are creating?

Like the driverless car, we need to engender trust about what is behind the wheel for AI in healthcare so that those we serve feel safe about acting on the results provided. Here’s what I heard at our AI symposium that shows we are headed in the right direction:

  • We have created talented multi-disciplinary teams from many UC Davis departments, schools, and colleges, as well as industry partners, to build our AI tools. The perspectives of many help to address biases and blind spots, and ensure a better, safer product.
  • We are considering pitfalls unique to this field. Dr. Hooman Rashidi shared how the phenomenon of “over-fitting” (also called “over-training”) was observed in his work developing an image-based AI cell phone app for diagnosis in hematopathology and surgical pathology – a surprise to him and to other experts. Like Hooman, we need to be open to paradoxical findings and not discount them (a brief sketch of what over-fitting looks like follows this list).
  • We anticipate unintended consequences. Dr. Richard Levenson raised the possibility of human “de-skilling” – i.e., we humans lose the ability to do tasks that AI tools take over, like making diagnoses or treatment decisions. And since AI can train itself, Richard pointed out that this could create a “closed loop” of AI interacting with AI, shutting humans out. Humans are uniquely creative in ways that machines can never be. Society may therefore lose the ability to discover new concepts or make improvements if humans don’t stay skilled and involved.
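
For readers curious what “over-fitting” means in practice, here is a minimal, hypothetical sketch in Python (using scikit-learn and synthetic data – not Dr. Rashidi’s app or data): an unconstrained model can look nearly perfect on the data it was trained on, yet perform noticeably worse on data it has never seen.

    # Hypothetical illustration of over-fitting ("over-training") -- not Dr. Rashidi's app.
    # The model memorizes its training data but generalizes poorly to new cases.
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    # Synthetic stand-in for labeled image-derived features (e.g., cell measurements).
    X, y = make_classification(n_samples=500, n_features=20, n_informative=5,
                               random_state=0)
    X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.3,
                                                      random_state=0)

    # A fully grown decision tree is flexible enough to memorize the training set.
    model = DecisionTreeClassifier(max_depth=None, random_state=0)
    model.fit(X_train, y_train)

    print("Training accuracy:  ", model.score(X_train, y_train))   # near 1.00
    print("Validation accuracy:", model.score(X_val, y_val))       # noticeably lower

The gap between those two numbers is the warning sign: only a held-out set the model never trained on exposes the problem, which is why paradoxical results like Hooman’s deserve attention rather than dismissal.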

One important topic not addressed in our symposium was morality and its role in AI – this was recently raised in an intriguing e-pub ahead of print in Nature (1). Using driverless cars as their example, the authors point out that moral decisions need to be planned for and programmed. They pose the dilemma: if an accident cannot be avoided and there is no safe trajectory, does the car swerve to hit one jay-walking teenager to spare its three elderly passengers? How do developers “divide up risk of harm between different stakeholders on the road”? The authors also demonstrate that different regions of the world have different moral values based on their culture – some place a higher value on women, the young, or those with more wealth. Should AI tools reflect regional values and morals (1)?

Perhaps the moral dilemmas of AI are best addressed by the messages shared at the UCD AI symposium by our medical school alum Keisuke Nakagawa MD, co-founder of WhiteKoat (2). He noted that as AI improves the quality of data, human judgment and action (i.e., the human touch) improve, too. I am optimistic that this means we will be able to improve our ability to make moral judgments as well. Physicians and healthcare providers supply the human touch that is necessary to guide AI development, including its morals. Our valuable human contribution will, in my opinion, prevent driverless – and soulless – healthcare.

Whether we realize it or not, our jobs are in many ways a lot like my daughter’s. Her role is to communicate safety and trust so that the public will adopt the services of driverless cars. As physicians, laboratorians, and scientists, we regularly need to explain our technology and findings to address concerns about control and safety – and yes, concerns about moral decisions too – to our external audiences, i.e., our clients, colleagues, and patients. Our tests, tools, and technology are never perfect, and this has always been a challenging role. Our challenges are growing, but that’s what makes our field interesting and fun.

References
1) Awad E, Dsouza S, Kim R, Schulz J, Henrich J, Shariff A, Bonnefon J-F, Rahwan I. The Moral Machine experiment. Nature. 2018 Oct 24. doi: 10.1038/s41586-018-0637-6. [Epub ahead of print] https://www.nature.com/articles/s41586-018-0637-6
2) https://www.whitekoat.org/


2 Comments

  1. Angela Haczku, November 4, 2018 at 8:47 am

    Lydia, excellent thoughts! Much agree.

  2. ernest nitka, November 1, 2018 at 1:51 pm

    I sometimes think we already have driverless healthcare – we’re all providers and our skill level is never taken into account. PA, MA, NP, MD. Just the rant of an almost retired MD -ernie
