The Future Of AI In Healthcare

Two AI luminaries, Fei-Fei Li and Andrew Ng, got together today on YouTube to discuss the state of AI in healthcare. Covid-19 has made healthcare a top priority for governments, businesses, and investors around the world and has accelerated efforts to apply artificial intelligence to improving our health, from drug discovery to more efficient hospital operations to better diagnostics.

The first quarter of 2021 saw a new funding record with nearly $2.5 billion raised by startups focusing on AI in healthcare, according to CB Insights. But this could be similar to the excitement around and investment in autonomous vehicle technologies a few years ago, as successful implementations of AI-based products and services in healthcare may not be just around the corner.

While both Li and Ng are currently applying AI to healthcare challenges, they believe that in the next few years they and their colleagues will still be in the experimentation stage. Progress will be “much slower than we wish over the next few years,” says Ng. “We are still figuring out the path to a human win,” agrees Li. For her, taking a “human-centered approach” is key to advancing the state of the art of AI in healthcare. She encourages her students to shadow clinicians in the hospital, “to see the human side,” to better understand both patients and the people taking care of them, which she regards as the key to successful adoption of AI-based solutions. This is a unique challenge in the healthcare sector, according to Li, who stresses the importance of the non-digitized aspect of healthcare, the human factor. “We have almost zero data on human behavior,” she says.

In addition, Ng advocates shifting AI development from being model-centric to being data-centric. This includes improving the quality of the data used to train AI programs and building the tools and processes required to put data at the center of developers’ work.

The quality, privacy, and availability of data have, of course, their unique challenges in healthcare settings. Ng points out that quality-of-data standards are still ambiguous and, as a result, AI developers need to brainstorm all the things that can go wrong and analyze the data accordingly. Li thinks that the most important thing is to recognize human responsibility. “AI is biased” is a phrase that puts the responsibility on the machine rather than on the people who collect and manage the data. For Li, putting guardrails against potential bias and ensuring data integrity is a first step in the design process.

In answering the question “What are the healthcare problems that are yet to be solved?” Ng mentions mental health, diagnostics, and the operational side of healthcare. Li cites the 250,000 people who die in the U.S. annually due to medical error. AI can help ensure that medical procedures are carried out correctly and that patients with chronic conditions are cared for, at home or in the clinic, in a timely fashion. “This is what ambient intelligence is about,” says Li, to serve as physician- and nurse-assistants, to catch errors before they occur.

The observations made by Ng and Li are supported by recent surveys and studies, all pointing to the nascent state of AI in healthcare:

·       90% of U.S. hospitals have an AI/automation strategy in place, up from 53% in the third quarter of 2019. But only 7% of hospitals’ AI strategies are fully operational, according to Sage Growth Partners;

·       The number of approved AI/ML-based medical devices has increased substantially since 2015, but currently, “there is no specific regulatory pathway for AI/ML-based medical devices in the USA or Europe,” concluded a study published in The Lancet;

·       Despite $27 billion in federally funded incentive programs to encourage hospitals and providers to adopt Electronic Health Records, there is no standard format or centralized repository of patient medical data. “The Covid-19 pandemic has underscored this issue,” observes a CB Insights report;

·       Physicians are susceptible to incorrect advice, whether the source is an AI system or other humans. “For high-risk settings like diagnostic decision making, such over-reliance on advice can be dangerous,” concludes an MIT study.

But, as Fei-Fei Li says, a barrier to adoption is also an opportunity. Both Li and Andrew Ng expect a tipping point in the future, when a big success story will be rapidly replicated and encourage healthcare providers—and patients—to embrace healthcare AI.