Artificial Intelligence is Missing the Effect of Affect

AI entrepreneurs are making lots of claims about how artificial intelligence can solve the problems with sickcare: poor quality, high costs, terrible experiences for patients and healthcare professionals, inequitable access, and mountains of administrivia burning out caregivers and medical care teams.

Fundamentally, doctors do three things: 1) they make decisions, 2) they perform procedures, and 3) they communicate with and comfort the sick and their support networks. A triple-threat doctor, while the goal, is sometimes hard to find. Being a triple-threat academic is a particularly heavy lift and is outdated.

Besides that, sickcare professional productivity is lagging.

To fill those gaps, healthcare artificial intelligence seems to be getting the most traction helping make decisions, whether by using computer vision techniques to improve the sensitivity and specificity of those interpreting images or patterns (radiologists reading images, dermatologists examining skin lesions to diagnose skin cancer, or ophthalmologists reviewing retinal scans to detect diseases like diabetic retinopathy) or by creating clinical decision support (CDS) algorithms to better diagnose and predict disease.
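
For readers who want the two metrics made concrete: sensitivity and specificity fall straight out of a model's confusion-matrix counts. Here is a minimal, illustrative Python sketch; the function name and the counts are invented for illustration, not drawn from any real study.

```python
# Illustrative only: how sensitivity and specificity are computed
# from a diagnostic model's predictions (hypothetical counts).

def sensitivity_specificity(tp, fp, tn, fn):
    """Return (sensitivity, specificity) from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)  # share of diseased patients correctly flagged
    specificity = tn / (tn + fp)  # share of healthy patients correctly cleared
    return sensitivity, specificity

# Hypothetical results from a retinal-scan classifier:
# 90 true positives, 10 false negatives, 950 true negatives, 50 false positives
sens, spec = sensitivity_specificity(tp=90, fp=50, tn=950, fn=10)
print(f"sensitivity = {sens:.2f}, specificity = {spec:.2f}")
# sensitivity = 0.90, specificity = 0.95
```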

Robotic process automation (RPA) helps to automate repetitive tasks. Still, despite its maturing process and growing number of use cases — which include updating electronic medical records, simplifying claims processing and managing staffing levels — RPA isn’t yet widespread in healthcare settings. Surgical robotics is progressing but comes with its own problems.

Finally, helping those cold fish with lousy bedside or webside manners is the next, but neglected, AI frontier.

Emotion AI (affective computing) has been less widely applied in healthcare, as can be seen in a recent literature review. In a meticulous analysis of 156 papers on AI in pregnancy health, a team in Spain found only two papers in which emotions were used as inputs. Their review, published in the journal IEEE Access, concluded that expanded use of affective computing could help improve health outcomes for pregnant women and their infants.

But if you don't know what you or someone else is feeling, or what caused those feelings, how is a data scientist supposed to make sense of those feelings to train an algorithm? (A toy sketch of that training problem follows the list below.) Look for:

  1. Brain implants that will allow you to read someone's amygdala (the feeling place), not just their neocortex (the thinking place)
  2. Objective measures of feelings, i.e., remote feelometers that detect pain, pleasure, anger or depression. How about a transcutaneous monitor that measures dopamine?
  3. Early warning systems when you are about to get so angry that you won't be able to think straight
  4. Mindfulness trackers that tell you when you are thinking about past life events that make you anxious, sad or depressed, or make you worry too much about a future over which you have no control
  5. Trigger alerts on your mobile device that tell you or predict when you are about to encounter a situation that drives you crazy. After all, as the Stoics said, anger is temporary madness
  6. Burnout thermometers that can tell when you are cooked but not done
  7. Tracking thoughts, feelings and actions that reveal entrepreneurial psychopathology
  8. Creating glossaries of entrepreneurial syndromes
  9. Bots that help you with the loneliness of the long distance entrepreneur
  10. A platform that helps you develop and sustain entrepreneurial habits
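
To make the data scientist's problem concrete: affective computing usually reduces "feelings" to labeled features, such as self-reports, facial-expression scores or physiological signals, that a model can learn from. The sketch below is purely illustrative; the feature names, values and labels are invented, and logistic regression via scikit-learn is just one plausible model choice, not a prescription.

```python
# Purely illustrative sketch: turning self-reported mood and a physiological
# signal into training data for a simple "distress" classifier.
# Feature names, values, and labels are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [self-reported mood (1-10), heart-rate variability (ms)]
X = np.array([
    [2, 25],   # low mood, low HRV
    [3, 30],
    [8, 70],   # high mood, high HRV
    [7, 65],
    [4, 40],
    [9, 80],
])
# Label: 1 = clinician judged the patient distressed, 0 = not distressed
y = np.array([1, 1, 0, 0, 1, 0])

model = LogisticRegression().fit(X, y)

# Score a new, hypothetical patient
new_patient = np.array([[3, 28]])
print("P(distress) =", model.predict_proba(new_patient)[0, 1])
```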

AI is missing the effect of affect. Look for that capability in a cyberbrain coming soon to a cloud near you.

But, be careful what you wish for. People are messy because they are emotional beings. Do we really want the same from machines?

Digital Health Entrepreneurship is available on Amazon and Springer.

Arlen Meyers, MD, MBA is the President and CEO of the Society of Physician Entrepreneurs.

Comments

  • Lee Hughes

    There’s a chance AI could run amok.

  • David Langfield

    Rarely has there been a technology with more potential to benefit healthcare. But can we trust AI?

  • Ross Spencer

    In reply to: David Langfield

    Just as it is in people, trust in AI systems will have to be earned over time.

  • Danny Harris

    This is the sad reality...

  • Joe McKenzie

    Robots could one day be more intelligent than people and take over.