In the future, A.I. medicine will let patients own their health data

A.I. has the power to transform the world — at least that’s what we’re constantly being told. Yes, it powers voice assistants and robotic dogs, but there are areas where A.I. is doing far more than making things easier and more convenient. In the case of medicine and health care, it’s actually saving lives.

There has been pushback lately, though. Medical professionals and government officials remain bullish about artificial intelligence’s long-term transformative potential, but researchers are taking a more cautious and measured approach to implementation. Even so, in just the past year we’ve seen huge leaps forward that are turning A.I.’s potential in medical care into a reality.

Today, we stand on the brink of a significant transformation in how we’ll all experience and use our medical data in the future.

A.I. in a broken system

“We became serious about it as a discipline maybe five years ago, but my whole career I’ve been haunted by the need for this technology,” Dr. Richard White, chair of radiology at Ohio State University’s Wexner Medical Center, told Digital Trends about the institution’s foray into A.I.

“For the longest time, I could not figure out why there wasn’t a use for computers to replicate what humans are doing – to laboriously look through all the images that were dynamic and try to come up with this, and then have the computer make the same mistakes that I was making was very frustrating for at least three decades.”

White said that when they tried to venture into radiomics, they saw a true need for computer smarts. “About four or five years ago, things were coming together and it was the right thing to do. It was meeting a dire need, and that’s when we got serious [with A.I.] in our labs.”

Radiologists from participating health systems speaking at Nvidia’s GTC conference this year, including White, Dr. Paul Chang, a professor and vice chairman at the University of Chicago, and Dr. Christopher Hess, a professor and chair of radiology at the University of California, San Francisco (UCSF), said they began exploring A.I. simply because the amount of medical data from improved imaging scans had become overwhelming.

Advances in medical imaging technology have resulted in the collection of significantly more patient data, Chang and his colleagues said, which has contributed to doctor burnout. Physicians see A.I.’s transformative potential because the technology could let them reclaim some of the time spent laboriously going through scans, which, according to Hess, allows “doctors to become healers again.”

But Chang cautions his fellow practitioners against being “seduced” by the new technology, noting that it must be correctly implemented to be effective. “You can’t prematurely incorporate A.I. into a system that’s broken,” he said.

In many ways, it’s that exact scenario that has led us to where we are today.

Owning your own data

The practice of medicine today is centered on algorithms and electronic health records. This software isn’t built around patient care or learning; it’s a system for categorizing treatments, which in turn allows insurers to pay doctors for the services that were performed.

“The industry has transformed doctors into clients to put in codes so that they can be billed,” Dr. Walter De Brouwer, CEO of data analytics firm Doc.ai, said. “We have to stop what we’re doing because it doesn’t work. If you take 2019, the predictions are that 400 doctors will commit suicide, 150,000 people will die, and the first cause of bankruptcy will be medical records, so we trust that everyone will try to fix a system that’s unfixable. It’s up to the patient and the doctors to try to fix it, because we are the agents of the last resort.”

For White, changing how data flows through the system is an important first step toward truly leveraging the power of A.I. Unlike other fields where A.I. has largely been seen as a successful technology enabler, such as customer service and autonomous driving, the healthcare vertical has been saddled with regulations designed to protect patient privacy rights.

“I think the patient has to be entrusted with their own data, and then they direct how that data gets used when we’re brought into their lives,” he said. “It is our moral obligation to protect it.”

For Anthem, the nation’s second-largest health insurance provider, covering more than 40 million Americans, the thinking is that if sharing data is more convenient, patients will feel more compelled to do it.

Doc.ai users choose which data trials to join and which aspects of their health data to share. Image: doc.ai

“It’s really a tradeoff of convenience and privacy,” said Rajeev Ronanki, Anthem’s chief digital officer. “So far, we haven’t done a good job of making healthcare simple, easy, and convenient, so therefore everyone wants to value privacy over everything else. For example, if it is going to save you fifteen minutes from trying to fill out the same redundant forms in your doctor’s office about your health conditions and you can get in and out quicker, then most people will choose convenience over wanting to make their data private. Surely, some people will choose to keep their health information private, and we want to be able to support both.”

As mobile devices become more powerful, healthcare professionals envision a world where patients own and store their data on their devices, leaving health institutions responsible for creating a system in which the data can be anonymized, shared, and exchanged.

“No institution is going to allow large amounts of data to be sent from their systems, so we have to bring the models and develop the model by circulating them to the subscribers and then watching the arrangement,” White said. “It’s just much more practical.”

A larger pool of data shared by patients could lead to more accurate clinical studies and reduce bias in medicine. In this model, researchers want to rely on edge learning rather than the cloud to process the data. Instead of sending information to the cloud, edge learning follows the Apple model for A.I., in which data is stored and processed locally, promising a higher degree of privacy. And because the data stays local, it can be processed much faster, De Brouwer claimed.

“So I collect all my data – my healthcare records – if I want to do a clinical trial,” De Brouwer continued. “If I am given a protocol, I trace my data through the protocols on my phone. I get tensors. I send off the tensors, which are irreversible, and they are averaged with all the other data, and I get back the data on my phone. My data is private, but I get a better prediction because tensors are the average of the average of the average of the average, which is better than the first average.”

De Brouwer claimed this would completely change medical research. “We can actually combine our tensors and leave our data where it is. People can actually monetize their data as a latent economic asset. That’s the promise of deep learning.”
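
De Brouwer’s description maps closely onto what machine learning researchers call federated learning: the raw records never leave the phone, while derived numerical updates (his “tensors”) are sent off and averaged into a shared model. The sketch below is a minimal, illustrative version of that idea in Python; the function names, the trivial training step, and the plain NumPy averaging are assumptions for the sake of example, not Doc.ai’s actual protocol.

```python
# Minimal sketch of the edge-learning idea described above: each phone keeps
# its raw health records and shares only a derived update ("tensor"), which a
# coordinator averages into a shared model. Illustrative only.
import numpy as np

def local_update(records: np.ndarray, global_model: np.ndarray, lr: float = 0.1) -> np.ndarray:
    """Compute a model update from one device's private records.
    A trivial gradient step toward the local data mean stands in for
    whatever per-device training a real system would run."""
    gradient = global_model - records.mean(axis=0)
    return global_model - lr * gradient  # this "tensor" is all that leaves the device

def federated_average(updates: list) -> np.ndarray:
    """Coordinator-side step: average the updates; raw records are never seen."""
    return np.mean(updates, axis=0)

# Simulated rounds: three phones, each holding 20 private records with 4 features.
rng = np.random.default_rng(0)
phones = [rng.normal(loc=1.0, size=(20, 4)) for _ in range(3)]
global_model = np.zeros(4)

for _ in range(5):
    updates = [local_update(data, global_model) for data in phones]
    global_model = federated_average(updates)

print("Shared model after 5 rounds:", global_model)
```

Each device then receives the improved shared model back, which matches De Brouwer’s point that the prediction he gets on his phone benefits from everyone else’s data even though the underlying records never left their owners’ devices.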

With technology enablers like 5G, connected home sensors, and smart health devices, medical researchers may soon have access to new data sources that they haven’t yet considered relevant to their research.

Doc.ai predicts that this so-called fuzzy data will grow by as much as 32 times each year, and that by 2020, we’ll be headed toward a factorial future. “A.I. is here to help because it brings us the gift of time,” De Brouwer said. “I am very optimistic about the future.”

Reducing bias

As part of its initiative for the responsible and ethical use of A.I., Anthem is now working with data scientists to evaluate 17 million records from its databases to ensure that there aren’t any biases in the algorithms that it has created.

“When you create algorithms that impact people’s lives, then you have to be a lot more careful,” Democratic Congressman Jerry McNerney, co-chair of the Congressional A.I. Caucus, said in a separate talk at GTC, emphasizing some of the life-and-death consequences when A.I. is used in critical infrastructure such as military applications. “When you have data that’s badly biased, then you are going to have similar results. Getting your hands on good data is a very big challenge.”

Additionally, bias can creep in more easily when data is limited, Hess explained, and it can skew medical studies and the interpretation of their results. Citing Stanford University research showcasing how A.I.-derived algorithms are “better” at detecting pneumonia than actual radiologists, Hess pointed out some of the fallacies in that presumption.

“What is better?” asked a facetious Hess, trying to pin down a definition of the word. While Hess admitted that Stanford’s algorithm had a high success rate – upwards of 75 percent – at detecting pneumonia by reading X-rays and other scans, it still underperformed when compared against the diagnoses made by the four radiologists cited in the study.

Though Hess views A.I. as a time-saving technology that allows physicians to go back to patient care rather than spend time on coding charts, he warns that the technology isn’t quite perfect, noting that A.I.’s object detection algorithms can completely misidentify scans.

Medical A.I. as a drone

As such, Hess and his colleagues view A.I. as a complementary technology in medicine that will help, not replace, human doctors. While A.I. is good at the repetitive, time-consuming tasks of identifying tumors and abnormalities in scans, Chang said, you still need human interaction in patient care.

To interpret the massive troves of data that will be collected, industry observers predict that each doctor will create numerous additional jobs for data scientists, who will build the algorithms that help make sense of that data. “We’re going to have the same in medicine. I think that every doctor will create a hundred data scientist jobs, so healthcare will become a continuous function,” De Brouwer said.

“We will always need caring people to interface with a human being, human-to-human,” White said. “I hope we never lose the touch of a hand on another person’s hand asking for help, and someone has to translate it to real-world situations.”
