Last week I discussed how tricky it can be to distinguish a true diagnosis from a false one. In this week’s Tech Tuesday, I want to look at a blood test called Galleri (no affiliation with BDS) that claims to detect cancers early and improve patient survival.
Galleri and A.I. Cancer Detection
Here’s a statement from Galleri’s parent company’s website:
The earlier that cancer can be found, the higher the chance of successful treatment and survival. Yet, too often cancer goes undetected until it has progressed to an advanced stage.
And on the Galleri website, they mention that:
The Galleri test uses next-generation sequencing (NGS) and machine-learning algorithms to analyze methylation patterns of cell-free DNA (cfDNA) in the bloodstream.
Through a single blood draw, they screen people for over 50 types of cancer. The results, if positive, identify the origin of the cancer signal to aid in diagnosis. All this in about ten days from the time of submission of a blood sample.
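To make the idea concrete, here is a toy sketch of what "machine learning on methylation patterns" can mean. This is entirely illustrative and not GRAIL's actual pipeline: the data are made up, the feature values loosely mimic "beta values" (the fraction of DNA fragments methylated at a given CpG site), and the "model" is just a threshold rule standing in for a real classifier.

```python
import random

random.seed(0)

def make_profile(cancer: bool, n_sites: int = 8) -> list[float]:
    """Simulate beta values at a few CpG sites.

    In this toy, 'cancer' profiles skew hypermethylated (mean 0.7)
    and healthy profiles skew hypomethylated (mean 0.3).
    """
    base = 0.7 if cancer else 0.3
    return [min(1.0, max(0.0, random.gauss(base, 0.1))) for _ in range(n_sites)]

# Tiny labeled dataset: 1 = cancer signal present, 0 = no signal.
data = [(make_profile(True), 1) for _ in range(50)] + \
       [(make_profile(False), 0) for _ in range(50)]

def predict(profile: list[float], threshold: float = 0.5) -> int:
    """Flag a cancer signal if average methylation crosses a threshold."""
    return int(sum(profile) / len(profile) > threshold)

correct = sum(predict(p) == label for p, label in data)
print(f"accuracy on synthetic data: {correct / len(data):.2f}")
```

The real test reportedly uses deep sequencing over a very large panel of methylation sites and a trained classifier, but the shape of the problem is the same: turn a blood sample into numeric features, then map those features to a signal/no-signal call (plus a predicted tissue of origin).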
Does this sound scary yet?
The Pros and Cons
The disclaimers point out that the test should not replace routine screening from a physician, that a negative result does not mean “no cancer,” and that false positives are possible: one could receive a result saying they have cancer when they actually do not.
The pros and cons of this test flood my mind: Imagine the turmoil of a false positive. Imagine finding cancer early and knocking it out with early treatment. Imagine missing a type of cancer that the Galleri test claims to screen for. And what about living with the knowledge of having a cancer that cannot be treated until it has progressed?
This sort of thinking reminds me of the choices that self-driving cars will need to make when a fatal accident is unavoidable: a crash will inevitably result in injury or death, but there are multiple options for whom to avoid or hit. Which person(s) should the car save? Ultimately, a programmer will have had to make that decision.
These are very complex topics and ones that deserve a lot of consideration and, dare I say, regulation.
How to Go About This?
As more A.I. systems are designed to help with our health or wellbeing, their operators will have to make tough decisions. It’s up to us, the public, to hold these companies accountable for their actions and to make sure they operate under complete transparency.
I believe it’s good to have a healthy dose of skepticism and patience when it comes to health and A.I. And I believe that it will long be necessary to have a human in the loop to make many of the important final decisions.
What do you say?
New Insight Into U.S. Air Pollution with Data Science
Francesca Dominici, a professor of biostatistics at the Harvard T.H. Chan School of Public Health and co-director of the Harvard Data Science Initiative, illuminates the interplay between air pollution, environmental injustice, and Covid-19. This article addresses her findings on the effects that air pollution levels have on human health in the United States.
All the Recommenders You Could Ever Want, in One Place
This repository contains examples and best practices for building recommendation systems, provided as Jupyter notebooks. The examples walk through five key tasks: (1) preparing data, (2) building models, (3) evaluating results, (4) selecting and optimizing models, and (5) deployment (on Azure, of course).
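For a feel of what task (2), model building, involves at its simplest, here is a minimal user-based collaborative-filtering sketch. The ratings, user names, and item IDs are made up for illustration; this is not code from the repository, which uses far more capable algorithms.

```python
import math

# Tiny made-up ratings matrix: user -> {item: rating on a 1-5 scale}.
ratings = {
    "ana":  {"m1": 5.0, "m2": 3.0, "m3": 4.0},
    "ben":  {"m1": 4.0, "m2": 2.0, "m4": 5.0},
    "cruz": {"m2": 5.0, "m3": 1.0, "m4": 2.0},
}

def cosine(u: dict, v: dict) -> float:
    """Cosine similarity between two users over their shared items."""
    shared = set(u) & set(v)
    if not shared:
        return 0.0
    dot = sum(u[i] * v[i] for i in shared)
    return dot / (math.hypot(*u.values()) * math.hypot(*v.values()))

def recommend(user: str) -> str:
    """Score unseen items by similarity-weighted ratings from other users."""
    seen = ratings[user]
    scores: dict[str, float] = {}
    for other, their in ratings.items():
        if other == user:
            continue
        sim = cosine(seen, their)
        for item, r in their.items():
            if item not in seen:
                scores[item] = scores.get(item, 0.0) + sim * r
    return max(scores, key=scores.get)

print(recommend("ana"))  # suggests an item ana has not rated yet
```

Real systems replace this with matrix factorization or neural models and add the surrounding tasks (data prep, evaluation, tuning, deployment), but the core idea of predicting unseen ratings from similar users is the same.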
Artificial Intelligence Hall of Shame
Here’s a list of incidents that caused, or nearly caused, harm, which aims to prompt developers to think more carefully about the tech they create. The roll of dishonor was started by Sean McGregor, a machine learning engineer at the voice-processing startup Syntiant. One of his favorite incidents is an A.I. blooper by a face-recognition-powered jaywalking-detection system in Ningbo, China, which accused a woman of jaywalking because her face appeared in an ad on the side of a passing bus.