Washington, D.C. — Today, the White House Office of Science and Technology Policy (OSTP) and the Center for American Progress co-hosted an online event to explore current and emergent uses of technology in the health care system as well as in consumer products related to health. During the 90-minute conversation, a group of leading experts focused on the impact of new technologies on health disparities; health care access, delivery, and outcomes; and areas ripe for research and policymaking.
Today’s conversation was part of a series of six public events to promote public education and engagement in areas where data-driven technologies intersect with the lives of Americans. The series was announced last month in a call for a Bill of Rights for an Automated Society by Dr. Alondra Nelson, deputy director for science and society at OSTP, and Dr. Eric Lander, the president’s science adviser and director of OSTP. The events engaged a wide array of stakeholders—in industry, academia, government, civil society, and the general public—in a national endeavor to ensure that new and emerging data-driven technologies abide by the enduring values of American democracy.
The Bill of Rights for an Automated Society series brought together a variety of practitioners, advocates, and federal government officials to offer insights and analysis on the risks, harms, benefits, and policy opportunities of artificial intelligence (AI) and other automated technologies. Dr. Nelson and Patrick Gaspard closed the final event in the series with an urgent, wide-ranging discussion about the path forward for protecting rights and promoting equity in emerging technologies.
During her introductory remarks, Dr. Nelson quoted President Joe Biden as aptly describing the crossroads at which we find ourselves: “This could be a moment marked by peril…or we can make it a moment of promise and possibility.” She continued, “That’s why the administration is fighting to get research and development right—to ensure new technologies are rooted in our common values: equality, accountability, justice, and integrity. We need a roadmap—a set of democratic principles—to ensure that all Americans can share in the benefits of innovation.”
“We need to think not only about innovation, but also about access and equity and the role that the government plays in facilitating that access and equity. Because it’s that lack of it that currently means the most vulnerable communities don’t receive the same quality of health care as their fellow Americans,” said Patrick Gaspard, president and CEO of the Center for American Progress. “These are moments where technology can lift up the veil of opacity over rights and freedoms. But we also know we are living at a time when information, data, can be weaponized in ways that can lead to great challenges in rights.”
The discussion was moderated by Micky Tripathi, national coordinator for health information technology with the U.S. Department of Health and Human Services. A few highlights from the discussion are included below:
David S. Jones, A. Bernard Ackerman Professor of the Culture of Medicine at Harvard University, stated, “Our health care system treats Black and brown people differently because of systemic, institutional, or interpersonal racism. We should stop using race in diagnostic and treatment algorithms. If we do include race and ancestry in the descriptive statistics of health care, we need to do it better. Race matters in medicine, but so do class disparities. … We need to end the use of simplistic race categories in medical decisions altogether. Progress can be made by shining a light on this problem. There is a role for regulation, but also a role for legal scholars, academics, and activists to point out when these issues come up.”
Jamila Michener, associate professor of government at Cornell University and co-director of the Cornell Center for Health Equity, added, “A core aspect of ensuring that AI is used for the purpose of advancing health equity and not eroding health equity is to consider the role of voice in shaping access, quality, and policies in AI technologies—how AI technologies can be deployed to fill gaps in voice and participation.” She continued, “We need to appreciate people’s lived experiences as expertise if we want to understand technology’s possibilities and use and limits. We should make sure that the people who have the most at stake and most to lose—losing bodily autonomy, losing access—are at the table when making decisions about AI needs that exist and determining which technologies to pursue, create, and use, and how to evaluate them.”
“There is an enormous amount of potential for algorithms to do great good in the health care system, but also potential for great harm. One biased example was prioritizing healthier white patients in front of sicker Black patients for extra primary care, extra home visits. … We don’t have a system for regulating algorithms in the same way we do for other products, such as drugs. … Algorithms should be measured against civil rights laws,” said Ziad Obermeyer, Blue Cross of California distinguished associate professor of policy and management at the University of California, Berkeley School of Public Health.
“Just because a technology uses an algorithm does not mean it is innovative—especially if it uses biases and advances structural inequities. We have to avoid being misled by the appearance of neutrality, innovation, and progress in thinking about artificial intelligence, and I worry that thinking of it as a medical device that needs to be regulated can make this even harder. This framing tends to focus attention on efficacy, accuracy, and access, and not on the structural assumptions underlying the technology,” said Dorothy E. Roberts, George A. Weiss University Professor of Law and Sociology and Raymond Pace and Sadie Tanner Mossell Alexander Professor of Civil Rights at the University of Pennsylvania. She continued, “From an intersectionality perspective, it requires the perspective of people’s status in the interlocking structures of inequality: class, citizenship, race. Technology can either embed, facilitate, and reproduce those unequal structures, or it can help to undermine and dismantle them. … Automated systems are especially good at hiding the backward assumptions that are built into them.”
Mark Schneider, health innovation adviser with ChristianaCare, said, “The system of care is not embracing the full potential of AI. We are at risk of creating a new digital divide. There are issues around broadband and subtle things like not having privacy during telehealth visits, access to intelligent devices. I’m concerned that this wave will create another digital divide like we had in the 90s.” He continued, “We are encountering resistance such as change management, lack of aligned incentives. AI is changing the way professionals do their work. This is the No. 1 challenge in dealing with clinicians—working with devices that can do something that previously only a doctor could.”
Watch the event: Bill of Rights for an Automated Society: The Health Care System
For more information or to speak with an expert, please contact Claudia Montecinos at email@example.com.