Biden administration official talks AI accountability procedures at Pitt panel

Alan Davidson, assistant secretary of commerce for communications and information and administrator of the National Telecommunications and Information Administration, speaks at the University Club Tuesday afternoon.

By Khushi Rai, Senior Staff Writer

The National Telecommunications and Information Administration is launching a request for public comments on artificial intelligence accountability, Alan Davidson announced Tuesday. 

“Real accountability means that entities bear responsibility for what they put out into the world and the measures we’re looking for are part of that accountability ecosystem,” Davidson said. “AI systems are often opaque, and it can be difficult to know whether they performed as claimed. For example, are they safe and effective in achieving their stated outcomes?”

Davidson is the assistant secretary of commerce for communications and information and administrator of the National Telecommunications and Information Administration. He said commenters can weigh in on what policies should support the development of AI audits, assessments, certifications and other tools that can foster trust in AI systems.

Pitt’s Institute for Cyber Law, Policy and Security held a panel discussion on AI accountability policies with keynote speaker Alan Davidson, who serves under the Biden administration, at the University Club Tuesday afternoon. Panelists included Nat Beuse, Ellen Goodman, Deborah Raji and Ben Winters. The panel discussed bias, transparency and audits related to AI systems.

Davidson acknowledged positive innovations related to AI, including the acceleration of the COVID-19 vaccine. He said if deployed properly, AI could create new economic opportunities, including jobs. However, he said it is necessary to face the negative consequences of the growing use of AI as well.

“It’s also becoming clear that there’s cause for concern about the consequences and the potential harms from AI system use,” Davidson said. “There are risks to privacy, security and safety, potential bias and discrimination, risk to trust and democracy and the implication for our jobs, our economy, the future of work.”

Raji, a Mozilla Foundation fellow and computer science doctoral student at the University of California, Berkeley, said an audit includes an evaluation component and accountability mechanisms. She said the evaluation component ensures that AI systems are legally nondiscriminatory.

“We’re trying to articulate a set of expectations we have once we release the model out into the world and measure what that performance or the behavior of the model is compared to what we would expect,” Raji said.

Raji said AI technology can sometimes be manipulated to falsely pass anti-discrimination audits. She said when auditors examine AI-based employment technology that companies use to hire candidates, the algorithm can conceal discriminatory elements.

“Of course, there’s concerns about fairness,” Raji said. “But before all of that, we should actually really be questioning what’s being put on the market and being branded as AI. The FTC has recently actually really leaned into this questioning of like, is it false marketing if you say this is AI technology that does a certain thing, but your model or your product doesn’t actually live up to its claim?”

Winters, senior counsel at the Electronic Privacy Information Center, said it’s crucial for AI to be transparent, because there have been instances of people not understanding AI systems, which have led to “power imbalances” and “rampant discrimination.”

“We’ve seen where there [have] been faulty predictions for a kidney transplant, which [means] — Black people will get lower on the list of a kidney transplant,” Winters said. “We’re seeing these really horrible examples of our systems, and you see it in the outputs of ChatGPT. So I think as the consciousness sort of rises, there’s hopefully sort of political demands from lawmakers, constituents to help sort of advance policies.”

Davidson also said it is too early to determine the impact AI will have on jobs in the future, noting that calculators were once predicted to put employees out of business.

“The key thing is going to be, you know, that there will be jobs that will be about managing how we use these systems,” Davidson said. “Human beings will continue to be better at a lot of things than machines for a long time to come.”