During the COVID-19 pandemic, artificial intelligence (AI) has become a trusted ally and partner in our daily lives. While there are countless benefits of AI, embedded bias could be determining who keeps their job, what news we see – even who lives or dies – without our even knowing it.
AI-based technology is enabling us to stay connected to our communities, order essential supplies and perform our jobs while adhering to stay-at-home orders. It keeps us entertained, using algorithms that compare our past Netflix viewing to recommend our next binge watch. AI even enables robots to answer the call for contactless food delivery to our homes and to deliver personal protective equipment (PPE) to hospitals without exposing supply workers to the virus.
These developments have saved and enhanced lives during the pandemic. However, even in the best of times, the sharpest minds at the most sophisticated companies struggle to ensure their use of AI is neither discriminatory nor inequitable. For instance, the most advanced AI facial recognition programs often fail to identify persons of color, a failing that large tech companies, including Amazon, Microsoft and IBM, recently acknowledged in their laudable decisions to pull these programs from law enforcement use for at least the next year. Further risks can be seen most prominently in our current use of AI to review online media content, make employment decisions and allocate healthcare opportunities.
During a time when we are largely limited to online interactions and social media to acquire news, AI is being deployed on its first solo mission, without human oversight, to determine the content people see on social media and news feeds. As Facebook Chief Executive Mark Zuckerberg recently confirmed, “Effectiveness has certainly been impacted by having less human review during Covid-19, and we do unfortunately expect to make more mistakes.” We have already seen these mistakes start to surface. The stakes are high for future mishaps: each could mean falsehoods promoted to vulnerable communities, verbal abuse left unmonitored or pornographic images circulated without recourse.
Consumer Reports recently demonstrated that, even with advertisements, which are more closely monitored than general posts — and COVID-19-specific content, which is likewise allocated greater resources for oversight — Facebook accepted ads providing dangerous false information such as: “Coronavirus is a HOAX,” and guidance to “stay healthy with SMALL daily doses” of bleach.
And while unemployment rates skyrocket at a pace that makes heads and numeric models spin, AI is being used to make pivotal employment decisions. AI systems used for hiring and evaluation have been challenged as biased and noncompliant with laws governing fairness and prohibiting discrimination. Their use as the sole source for such determinations is especially problematic at this time, given that joblessness is disproportionately affecting persons of color.
Perhaps the most consequential use of AI in this pandemic is in healthcare, where essential decisions such as whom to contact, test or offer scarce resources are increasingly based on AI. Kai-Fu Lee, a renowned global AI expert, sees “a clear roadmap of how AI, accelerated by the pandemic, will be infused into health care.”
These AI programs may be doubling down on past and current discrimination while determining who can access ventilators and intensive care, triaging patients to appropriate care settings and screening for COVID-19 symptoms. Results could be dangerous if the data are even slightly off, as Dr. Isaac Kohane, a Harvard Medical School professor, has warned.
In short, we are creating the perfect storm against persons of color and other underrepresented populations. They are the most at risk of contracting COVID-19, most likely to lose their jobs and most vulnerable to biased AI denying them life-saving measures. While there are tremendous benefits to our accelerated use of AI during the pandemic, we’ve jumped in without the necessary checks in place to secure our safety and commitment to principles of justice, including avoiding unconscious bias and preventing discrimination.
Before this crisis, EqualAI joined leaders in the field to warn that we are at a dangerous tipping point: AI-related technologies are the fastest-growing part of the economy, dominating our lives and industries, while simultaneously escaping the requisite laws and standards that would ensure consumer safety and fairness. EqualAI is working with key stakeholders to create a certification program to verify that AI programs are tested comprehensively and routinely. In the interim, given the key functions being delegated to AI programs in the COVID-19 crisis, minimal standards and assurances for AI systems should be adopted before their mass distribution.
AI systems should comply with clear, basic and universal standards before their public use, particularly when being employed for vital functions such as healthcare delivery. Given the time sensitivities, we can require a check that is quick but still impactful — such as public confirmation that a company has reviewed their use of AI and is in compliance with safety requirements, as well as laws and regulations preventing discrimination.
In light of the crucial role that federal government support is playing during the pandemic, one of the most important solutions is to require assurance of legal compliance from recipients of federal relief funding that employ AI technologies for critical uses. Such an effort was started recently by Members of Congress, including Representatives Yvette Clarke and Don Beyer and Senators Ed Markey and Ron Wyden, to safeguard protected persons and classes, and it should be enacted. Likewise, companies should follow the directive recently issued by the FTC, which asks employers to understand the data underlying the AI they use and to ensure transparent and explainable outcomes.
Speed, ease and efficiency should not blind us to the dangers of AI systems making life-and-death determinations based on factors we would never accept if rendered by an actual human.
Miriam Vogel is President and CEO of EqualAI. Miriam previously served in the White House where she led the President’s Equal Pay Task Force. She also served as Associate Deputy Attorney General at the Department of Justice and, under Deputy Attorney General Sally Yates, led the creation and development of Implicit Bias Training for Federal Law Enforcement.