Washington, D.C. — A new report from the Center for American Progress highlights a critical, yet often ignored, aspect of responsible generative artificial intelligence (AI): the lack of safety requirements and tools for third parties who use developers' AI models. This problem will only get worse, the report argues, as more third parties access and build on generative AI models without any required security or safety features.
The report comes amid news of the widespread sharing of explicit deepfake images of singer Taylor Swift on social media, a high-profile debacle that underscores the urgent need for greater safeguards when it comes to generative AI.
The report outlines the differences in the generative AI ecosystem between the developers who create the AI models that can generate text, video, and audio and the third-party deployers who use these models through application programming interfaces (APIs). Today, nearly all generative AI developers fail to require that third-party deployers using their AI models include security or safety features.
“Generative AI developers are not adequately safeguarding their technology from the unique risks posed by third-party usage of their models,” said Megan Shahi, director of Technology Policy at CAP and lead author of the report. “And deployers don’t offer end users adequate tools and transparency to safely and responsibly use these powerful systems because they are not required to do so by the developers. This represents a huge vulnerability and blind spot in the evolving responsible AI discussion.”
The report recommends that actions be taken by companies in the short term, as well as by civil society and government, to address the challenge of responsible AI implementation for third-party usage. These include:
- Both first- and third-party responsible AI tooling should be expanded and prioritized to prevent abuse and increase transparency.
- Companies must shore up their safeguards for third-party usage by prioritizing enforcement of existing policies, abuse prevention, transparency, data, tooling, and reporting.
- President Joe Biden’s executive order on AI tasked the National Institute of Standards and Technology (NIST) with examining risk management for generative AI. That must include clear guidance on responsible AI requirements for third-party API usage.
- The White House should reexamine the voluntary commitments of leading AI companies to ensure they can be met under current conditions.
- The Federal Trade Commission should initiate a 6(b) study on what kinds of safety requirements leading generative AI developers should impose on third-party deployers using their APIs.
Read the report: “Generative AI Should Be Developed and Deployed Responsibly at Every Level for Everyone” by Megan Shahi, Adam Conner, Nicole Alvarez, and Sydney Bryant.
For more information or to speak with an expert, please contact Sam Hananel at email@example.com.