Introduction and summary
The February 2024 Center for American Progress report “Generative AI Should Be Developed and Deployed Responsibly at Every Level for Everyone”1 directed a spotlight onto the “significant disparity between the still minimal safety measures taken by developers in their first-party deployments and the almost nonexistent safety requirements for third parties deploying the models via API.”2 It recommended a series of commonsense changes for generative artificial intelligence (AI) developers to make, touching on enforcement of existing policies, abuse prevention, data and tooling, reporting, and transparency.
Six months after the original report’s publication, this report re-reviews the external-facing policies of six major AI developers—OpenAI,3 Anthropic,4 Microsoft,5 Amazon,6 Google,7 and Meta8—and finds that none have made the changes recommended by CAP. While some developers have safety requirements in limited areas of third-party deployments, no developer has the comprehensive safety measures recommended in the original report. CAP has emphasized the importance of a focus on these third-party safety requirements in comments to the National Institute of Standards and Technology (NIST) and elsewhere.9
While the earlier report lists all of CAP’s recommendations for a comprehensive approach to safety in third-party AI deployments, this report urgently highlights four priority policies that should be implemented immediately: end-user reporting, content safety tooling, standardized transparency reporting, and comprehensive documentation to assist deployer compliance.
Background
Generative artificial intelligence uses advanced foundation models to create new content, such as text or images. Developers are responsible for the foundational work of building, training, and refining AI systems, such as large language models (LLMs), that power generative AI applications.10 There are two main ways the public can use generative AI services and access these advanced foundation models: in first-party deployments and in third-party deployments. In first-party deployments, model developers make their generative AI services available directly to the public themselves. In third-party deployments, entities or individuals that are external and independent from the original developer of the AI system deploy the technology. They may use it in various applications, offer analytical insights, or develop derivative services based on the original technology.11 Often, they are accessing the AI model via an application programming interface (API).
OpenAI’s ChatGPT, Anthropic’s Claude.ai, Microsoft’s Copilot, and Google’s Gemini (formerly Bard) are examples of first-party deployments of generative AI. For free or a small fee, users can experiment with generative AI through a website or an app, become familiar with it, and help expedite the spread of generative AI into other, often preexisting, technologies. AI Overviews in Google Search show the average web searcher how Google’s customized Gemini foundation model can aggregate information across search results to answer complicated multipart questions.12 Microsoft’s GitHub Copilot helps software developers write and debug code quickly.13 First-party deployments give developers valuable firsthand usage data and feedback and help normalize generative AI with the public, but currently, the subscription business model of first-party deployments does not cover the enormous cost of developing and training a generative AI model.
Glossary
This follow-up report uses terms commonly associated with various software and laws in specific ways to discuss the use of generative AI technology. AI documentation and risk management plans rarely articulate the crucial distinction between developer and deployer. For that reason, a glossary of the key terms used in this report, and the reasoning behind them, is provided below.
- Developers: Entities or individuals involved in the creation and development of AI systems. Developers are responsible for the foundational work of building, training, and refining AI systems, such as large language models, that power generative applications. In some cases, a single entity may function as both a developer and a deployer, managing the entire process, from AI model creation to its application and user interaction.14 For example, OpenAI is the developer of the GPT family of LLMs that power ChatGPT, and Google is the developer of the Gemini LLM.15
- Deployers: Entities or individuals that implement and manage AI technologies in user-facing applications or services. Deployers typically use the tools and models offered by developers, primarily through an application programming interface, to provide AI-driven services or features within their own products or platforms. This includes the integration of AI functionalities into apps and optimization of the user experience.16 Historically, those who build on APIs and platforms have also been called developers, but in this report, “developers” refers only to the companies that built the AI models.
- First-party AI systems: AI systems that are hosted and operated by the developer of the AI-based technology. These entities not only develop the AI models but also manage their deployment and user interaction on their own platforms, such as websites or apps. For example, Google has developed the Gemini LLM, which is used to power Google’s Gemini chatbot (formerly named Bard).17
- Third-party AI systems: Entities or individuals that are external and independent from the original developer of AI systems. They are deployers of the AI systems and may use the AI technology in various applications, offer analytical insights, or develop derivative services based on the original technology.18 Often, they are accessing the AI model via an API. For example, Snap Inc. uses OpenAI’s ChatGPT via API to power its My AI bot in its app Snapchat.19
- Open-source AI models: AI models whose underlying source code, design, model weights, and/or training methods are made publicly accessible via open-source licenses. Meta’s Llama 3.120 and BigScience’s BLOOM21 are examples of open-source large language models.
AI developers are poised to expand their customer bases and generate the most revenue through third-party deployments, which allow third parties to deploy generative AI services in their own use cases via APIs. For example, a third-party wellness app could use a generative AI API to power a chatbot that customizes congratulations and encouragement for each individual user and their unique wellness goals. Another common use case for third-party deployments of generative AI services is in customer service help centers and chatbots. In third-party deployments, the AI developer may have no relationship or interactions with the end users, only with the third-party deployer, so safety and security in these systems is not guaranteed.
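To make the developer/deployer split concrete, the following sketch shows roughly what a third-party integration looks like in code. It is a minimal illustration only, not drawn from any company’s documentation: the wellness-app framing, function name, and model name are hypothetical, and it assumes OpenAI’s Python SDK simply because OpenAI’s API is discussed throughout this report.

```python
from openai import OpenAI

client = OpenAI()  # the deployer's API key is read from the OPENAI_API_KEY environment variable

def wellness_congrats(user_name: str, goal: str) -> str:
    """Hypothetical third-party feature: a wellness app asks the developer's
    hosted model for a short, personalized encouragement message."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; the deployer would use whichever model it licenses
        messages=[
            {"role": "system", "content": "You are an encouraging wellness coach. Reply in two sentences."},
            {"role": "user", "content": f"Congratulate {user_name} on progress toward this goal: {goal}."},
        ],
    )
    return response.choices[0].message.content
```

In this arrangement, the end user only ever interacts with the wellness app; the model developer sees an API call from the deployer, not the person behind it.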
When directly responsible to end users in first-party deployments, AI developers include some bare-minimum safety features such as content-level abuse filtering and integrated reporting functions. But in third-party deployments, where AI developers are a step removed from the end users, safety seems to be optional—in practice, if not in theory. In theory, many AI developers “require” third-party API usage to comply with acceptable use policies,22 and some “require” disclosures to end users that an AI is being used,23 among other protections. In practice, there is little insight into the enforcement of such safety features, and there are few ways to report violations. The theoretical protections for end users that AI developers supposedly require are, in reality, nothing more than lofty, high-minded, and underrealized goals for responsible AI.
As AI developers fund their future work with increasing numbers of third-party deployments, the gap in safety measures between first- and third-party deployments will become untenable. It is simply not possible for generative AI companies to claim their services are safe without addressing the lack of safety requirements for third-party deployers or to meet their voluntary safety commitments to the White House24 and the “Tech Accord to Combat Deceptive Use of AI in 2024 Elections,” announced at the 2024 Munich Security Conference,25 without clear and stringent safety requirements for third-party API deployments of their services. Generative AI developers must plug the enormous safety gaps in their systems, starting with implementing end-user reporting, content safety tooling, standardized transparency reporting, and comprehensive documentation to assist deployer compliance.
End-user reporting
Anyone using an LLM—at any time, in any capacity, and in any format—should be able to report potential violations of an AI system to both the developer and the deployer. Whether the generative AI service is generating hate speech or instructions to build a bomb, end users must be able to report potential misuses. While the end user in a first-party deployment has a direct reporting line to the AI developer, the end user in a third-party deployment only has a relationship with the third-party deployer, not the AI developer. For example, in OpenAI’s first-party deployment of ChatGPT, end users can report problematic outputs directly to OpenAI.26 In Snapchat’s third-party deployment My AI, based on OpenAI’s ChatGPT, end users report problematic outputs only to Snapchat.27 A third-party deployer that violates the AI developer’s terms of service or acceptable use policies, knowingly or not, has no incentive to self-report violations flagged by the end user. Direct end-user reporting to the AI developer—as well as the deployer in third-party deployments—is vital for ensuring user trust and safety and should be required and strictly enforced in all third-party deployments.
Microsoft’s28 and OpenAI’s29 feedback forms are available to end users in both first- and third-party deployments through their websites, but that means a third-party end user must hunt through the companies’ websites and help centers to find the feedback forms to provide a report to the developer. Mobile app stores, such as Google Play, require in-app reporting for generative AI apps on Android devices, but those reporting features are not necessarily reflected in website versions of the applications.30 Microsoft, OpenAI, and Google Play do not require deployers to forward copies of the abuse reports they directly receive. Without insight into abuse reports made directly to deployers, AI developers cannot know how deployers are integrating the AI products into deployments and whether they are complying with the developers’ terms of service and other policies.
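No developer currently documents an intake channel for the reports that deployers receive, so the sketch below is purely hypothetical: the endpoint URL, payload fields, and environment variable are invented to illustrate how lightweight report forwarding from a deployer to a developer could be if such a channel existed.

```python
import os
import requests

# Hypothetical endpoint: no AI developer currently documents an abuse-report intake API.
DEVELOPER_REPORT_ENDPOINT = "https://api.example-ai-developer.com/v1/abuse-reports"

def forward_end_user_report(conversation_id: str, category: str, user_comment: str) -> None:
    """Relay an end-user abuse report from the deployer's app to the model developer."""
    payload = {
        "conversation_id": conversation_id,    # deployer-side identifier for the flagged exchange
        "category": category,                  # e.g., "hate_speech" or "dangerous_instructions"
        "comment": user_comment,               # the end user's own description of the problem
        "deployment": "example-wellness-app",  # which third-party deployment the report came from
    }
    requests.post(
        DEVELOPER_REPORT_ENDPOINT,
        json=payload,
        headers={"Authorization": f"Bearer {os.environ['DEVELOPER_API_KEY']}"},
        timeout=10,
    )
```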
No safety tool can be 100 percent effective, and end-user reporting is an essential last line of defense for an LLM and its safety systems.
Content safety tooling
While many of the AI developers discussed in this report are building safety tools, all developers should require their use in deployments. Moreover, having these tools turned “on” must be the default setting for all first- and third-party generative AI applications. Content safety tools include input/output filtering, limiting inputs to validated dropdown fields instead of allowing open-ended inputs, and prompt engineering that limits the pool of information a generative AI service can pull from in creating content. Restricting inputs to validated dropdown fields rather than open-ended prompts is more secure and helps prevent users or systems from experiencing abuse. For example, an AI image generator could include one dropdown field for artistic style, such as cartoon, abstract, or cubist, and one dropdown field for artistic medium, such as oil painting, mural, or sculpture. Specifying and narrowing the pool of information a generative AI service can draw from in creating content reduces the likelihood of misuse and helps ensure the resulting output is based on credible sources.
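As a rough sketch of the dropdown approach described above, the snippet below mirrors the image-generator example; the category lists, function name, and prompt template are illustrative assumptions, not any developer’s actual tooling.

```python
# Enumerated input options presented to the end user as dropdown menus.
ALLOWED_STYLES = {"cartoon", "abstract", "cubist"}
ALLOWED_MEDIUMS = {"oil painting", "mural", "sculpture"}

def build_image_prompt(style: str, medium: str) -> str:
    """Compose a generation prompt from validated dropdown selections, rejecting anything else."""
    if style not in ALLOWED_STYLES or medium not in ALLOWED_MEDIUMS:
        raise ValueError("Selection must come from the predefined dropdown options.")
    # Because the end user never types free text, open-ended abusive requests
    # and prompt-injection attempts are blocked at the input layer.
    return f"A {style}-style {medium} of a landscape."
```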
Content safety tooling has an outsize impact on end users’ experience of a generative AI service—shielding them from the accidental creation of harmful content and shielding others from intentional and malicious attempts to create harmful content—and AI developers are in the best position to create content safety tooling for their generative AI services. Content safety tools only work when turned on, though, and as long as deployers are free to turn off or never turn on the tools in third-party deployments, end users remain vulnerable.
All six of the AI companies reviewed have content safety tools available, and at least OpenAI, Anthropic, and Google make them available for free and allow deployers to filter the inputs and outputs of their third-party deployments, which encourages deployers to make use of the content safety tools.31 However, Microsoft is the only developer that reviews and preapproves any use of Azure OpenAI with modified abuse monitoring, a configuration that allows a deployer to weaken or turn off safety features.32 The application for modified abuse monitoring includes detailed questions about the requesting organization, its size, and its industry, as well as the seven narrow use cases in which Microsoft may allow modified content moderation.
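As one concrete illustration of free input/output filtering, the sketch below screens both the end user’s prompt and the model’s reply with OpenAI’s moderation endpoint before anything is shown to the user; the wrapper functions, refusal messages, and model name are assumptions for the sake of the example.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def is_flagged(text: str) -> bool:
    """Check a prompt or completion against OpenAI's free moderation endpoint."""
    result = client.moderations.create(input=text)
    return result.results[0].flagged

def safe_generate(user_prompt: str) -> str:
    """Filter the input, generate a reply, then filter the output before returning it."""
    if is_flagged(user_prompt):
        return "Sorry, that request appears to violate the usage policy."
    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder for whichever model the deployer uses
        messages=[{"role": "user", "content": user_prompt}],
    )
    answer = completion.choices[0].message.content
    return "Sorry, I can't share that response." if is_flagged(answer) else answer
```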
Content safety tooling must default to being “on” in third-party deployments, and AI developers must require it to protect end users.
Standardized transparency reporting
Public-facing transparency reports have been instrumental in reestablishing public trust in social media companies and helping regulators, stakeholders, and end users understand the risks of using social media. Following that example, AI companies must publish standardized transparency reports to help external researchers, governments, and users understand generative AI systems and their impacts more clearly. Developers must also publish statistics about the frequency and kinds of enforcement actions they take against deployers and end users to show that they are diligent in enforcing their terms of service and usage policies in every deployment context. Transparency reports have also played a role in encouraging social media companies to consistently enforce their terms of service to improve the end-user experience and would likely have the same effect on generative AI developers.
AI companies may be reluctant to push aside the curtain, so to speak, but transparency reporting is a necessary part of maintaining public confidence in AI services. A standardized practice of industrywide transparency reporting would allow deployers and end users to directly compare AI developers and the user experience they can expect with each. These transparency reports should highlight the prevalence of violations across abuse categories for each AI service the AI company offers and include both first-party reports from developers and third-party reports from deployers.
Though Microsoft,33 Google,34 Amazon,35 and Anthropic36 publish transparency reports about the capabilities of their AI models, they do not disclose the number of violations of acceptable use policies or terms of service that they detect or that are reported to them. By contrast, OpenAI has published a subset of this information; in May, the company released a report titled “AI and Covert Influence Operations: Latest Trends”37 that focused specifically on influence operations using its products, which are prohibited by OpenAI’s usage policies.
In summary, AI developers should commit to publishing standardized transparency reports that:
- Disclose the prevalence of violations across abuse categories.
- Include both first-party reports and third-party reports from deployers.
- Provide statistics about the frequency and kinds of enforcement actions they take against deployers and end users.
Transparency is a critical mechanism for external stakeholders to better understand these systems and to measure the efficacy of regulations and mitigations, including those mentioned in this report.
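To make “standardized” concrete, one possible shape for a per-category report entry is sketched below. No shared industry schema exists today; every field name and the placeholder values are hypothetical, and they simply mirror the three commitments listed above.

```python
from dataclasses import dataclass, field

@dataclass
class TransparencyReportEntry:
    """Hypothetical per-category entry in a standardized quarterly transparency report."""
    abuse_category: str                   # e.g., "hate_speech" or "election_disinformation"
    first_party_violations: int           # violations detected in the developer's own deployments
    third_party_violations: int           # violations detected in, or reported by, API deployers
    enforcement_actions: dict[str, int] = field(default_factory=dict)  # action type -> count

# Illustrative placeholder values only, not real statistics.
example_entry = TransparencyReportEntry(
    abuse_category="hate_speech",
    first_party_violations=0,
    third_party_violations=0,
    enforcement_actions={"warning": 0, "account_suspension": 0},
)
```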
Comprehensive documentation to assist deployer compliance
Developers must provide clear and comprehensive safety documentation to assist deployer compliance with acceptable use policies and developer terms. For deployers, finalizing and shipping their application is top of mind, not carefully assessing compliance with each provision of a developer’s terms of use; acceptable use, privacy, and subscription policies; help center guidance; and more. As the number of third-party deployments increases, any oversights in deployer compliance will become magnified in the user experience. If deployers cannot incorporate safety tooling correctly or effectively, end users will be vulnerable to the accidental or malicious creation of harmful content.
Developers are in the best position to create documentation that guides and supports deployer compliance. OpenAI and Anthropic have published documentation on incorporating content moderation filters and other safety tooling,38 but developers have not included responsible use and policy compliance in their quick start guides and deployer FAQs. As a result, deployers may still struggle to navigate the interconnected web of policies and piecemeal information scattered across help forums, API reference documentation, and legal policy centers.39 NIST’s work to develop such policies and guidelines could be a valuable addition, helping both developers and deployers do this critical work.
Conclusion
To be clear, these are the highest priority, but not the only, changes developers should make to increase the safety of their generative AI services. As the example of social media illustrates, end-user reporting and content safety tooling are the vital foundation for any trust between the public and a technology company. Standardized transparency reporting demonstrates commitment and accountability to the public. Comprehensive documentation to assist deployer compliance will ensure that deployers are set up for success and preserve public trust in a developer’s safety measures, no matter the implementation.
AI developers should especially value the public’s trust, as they seek to reassure us that they have learned from social media’s example and are working to create a better future for AI than that shown in science fiction.