Reading Time: 3 minutes

A.I. is becoming more and more ubiquitous, from recommending the (sometimes only) news we read, to assessing our risk and policy specifics for insurance, to deciding whether or not we qualify for a mortgage. With such power vested in A.I. comes the need for oversight: human decision makers in the loop who can override A.I. where bias creeps in.

In January of this year, the Montreal A.I. Ethics Institute published a huge (188-page) report on the state of ethics in and around A.I. The intro reads:

The problems at the forefront of AI Ethics today – injustice, discrimination and retaliation – are battles that marginalized communities have been fighting for decades. It only took us millions of dollars and immense public interest into our darling technology to notice. Algorithms manifest and further exacerbate the structural inequalities in our society. We’re finally starting to see ‘bias’ – algorithmic or otherwise – for what it really is: a fundamentally human problem.

There are many ways in which bias can enter a system and cause harm. Imagine, for example, if SIRI were trained only on voices with southern American accents. Would it understand voices from other parts of the country or world, or would it have a higher error rate? Would “Call 9-1-1!” still work for a Nigerian man in trouble, whose second language is English, shouting into his phone at 3 AM? Or consider an algorithm created to detect patient response to a particular drug, trained on data from only a single race or age group. Would that algorithm generalize well enough to be used broadly?

Generally, humans are quite good at identifying bias in these and other situations, whereas A.I. is not yet aware of its own shortcomings (pardon the anthropomorphism). And with A.I.-driven models growing in number and responsibility, it’s more important than ever to look at the ethics of such models.

The report by the Montreal A.I. Ethics Institute spotlights the impact of A.I. on our youngest generation, worldwide. They are strongly impacted by A.I. and are easily made victims of algorithmic discrimination, racial injustice, bullying, and misinformation.

I highly encourage you to take a moment to read through this important report.

To underscore its importance, the first section subheading is: “Disbelieving, devaluing, and discrediting the contributions of Black women has been the historical norm. Let’s write a new playbook for AI Ethics.”

Have a good week!


Of Interest

Covid-19 Immunity Likely Lasts for Years
A new study shows that immune cells primed to fight the coronavirus should persist long after someone is vaccinated or recovers from infection. The result is an encouraging sign that immunity to the virus may last for many years, potentially alleviating fears that the Covid-19 vaccine would require repeated booster shots to protect against the disease and finally get the pandemic under control. Read more here.

New ML Theory Questions the Very Nature of Science
A novel computer algorithm, or set of rules, accurately predicts the orbits of planets in the solar system. It could be adapted to better predict and control the behavior of the plasma that fuels fusion facilities designed to harvest, on Earth, the fusion energy that powers the sun and stars. The algorithm, devised by a scientist at the U.S. Department of Energy’s (DOE) Princeton Plasma Physics Laboratory (PPPL), applies machine learning to develop predictions that raise questions about the very nature of science. Read more here.

U.K. Spy Agency Turns to Artificial Intelligence
UK intelligence agency GCHQ intends to use artificial intelligence to tackle issues from child sexual abuse to disinformation and human trafficking. The agency has published a paper, Ethics of AI: Pioneering a New National Security, saying the technology will be put at the heart of its operations and officials say it will help analysts spot patterns hidden inside large – and fast growing – amounts of data. Read more here.