Introduction and summary
As policymakers seek solutions to the current and future impacts of artificial intelligence (AI) on workers across the country, workers themselves are using their union bargaining power to negotiate contract provisions that prevent the elimination of jobs, place limits on surveillance and algorithmic management, and enable workers to benefit from productivity boosts offered by AI tools.1
AI and machine learning technologies2 are being used in new ways to automate nonroutine tasks, from writing code and human-sounding text to managing schedules, promising an increase in productivity for some workers. Nevertheless, many workers are understandably nervous that they will be denied the benefits of AI technology. Even as worker productivity increased over the past several decades, those gains went “everywhere but the paychecks of the bottom 80% of workers,” according to research from the Economic Policy Institute.3 For many workers, AI further threatens to automate parts or all of their jobs or worsen conditions by replacing human decision-making with algorithmic management driven by data harvested via invasive surveillance. Sam Altman, CEO of ChatGPT developer OpenAI, predicts “jobs are definitely going to go away, full stop.”4
While policymakers can take advantage of existing worker protections to ensure the use of AI in the workplace benefits workers and consider additional legislative steps, the examples set by unions make clear that policymakers also must complement these protections by strengthening the right to join a union and bargain collectively. Existing law can be applied in ways that protect workers from some of the potential harms of AI in hiring and the workplace. Jennifer Abruzzo, general counsel of the National Labor Relations Board (NLRB), argued in a 2022 memo on electronic monitoring and algorithmic management that many of the AI technologies used by employers are already illegal under settled law and urged the board to adopt a framework for protecting employees from surveillance and algorithmic management that interferes with protected activity.5 The Equal Employment Opportunity Commission issued technical assistance in 2021 on the Americans with Disabilities Act6 and the use of algorithmic decision-making tools in hiring and employment, as part of a larger algorithmic fairness initiative.7 Across the Biden administration, policymakers can further consider how existing law already regulates the use of AI.
Policymakers in Congress and the administration must center workers’ needs in their response to the use and development of AI8 through measures that strengthen workers’ right to come together in unions; ensure AI augments, rather than replaces, workers; prepare the workforce for AI adoption; and help meet the needs of workers who are displaced.9 Lawmakers in Congress are advocating for bills—such as the Stop Spying Bosses Act and the No Robot Bosses Act—that would protect workers from certain threats from AI. These policies should complement bills that strengthen unions, including the Protecting the Right to Organize (PRO) Act, which would stiffen penalties for union busting and enhance protections for workers trying to organize their colleagues,10 and the Public Service Freedom to Negotiate Act, which proposes strengthening organizing rights in the public sector.11
The Stop Spying Bosses Act and the No Robot Bosses Act
The Stop Spying Bosses Act and the No Robot Bosses Act, both introduced in Congress in 2023, address different aspects of employees’ work lives.
The Stop Spying Bosses Act12 would outlaw workplace surveillance that monitors worker organizing and bar the use of AI to make behavioral predictions about workers. The bill would also require employers who surveil employees to disclose their data collection practices so workers know how their data are being collected and used, and it would establish a Privacy and Technology Division at the U.S. Department of Labor to administer the law.
The No Robot Bosses Act13 would prohibit employers from relying exclusively on automated decision-making systems in making employment decisions such as hiring or firing workers. The bill would require testing and oversight of decision-making systems to ensure they do not have a discriminatory impact on workers, and when automated systems are used to help employers make a decision, employers must describe to workers how the system was used and allow workers or job applicants to dispute the system’s output with a human.
A comprehensive policy covering AI in the workplace would combine legislation to regulate novel uses of AI that harm workers, enforcement of existing law against uses of AI that already run afoul of existing employee protections, and stronger rights to organize a union and collectively bargain. Workers in some of the nation’s largest unions have worked tirelessly to negotiate over the uses of new technologies and set an example for how other unions can bargain over AI, but policymakers must open the door for more workers to advocate for themselves over how they can reap the promised benefits of AI.
Unions prevent workers from being replaced
In some industries where AI’s deployment threatens to replace workers, unions are winning control over how employers can use AI technology and how employees must be compensated. In late 2023, casino workers in Las Vegas represented by the Culinary Workers Union won a new contract that includes a severance package paying $2,000 for each year worked if an employee’s role is eliminated due to “technology or AI.”14 AI was a contentious issue in two of the largest and most highly publicized strikes of 2023: the Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA)15 strike of actors and the Writers Guild of America (WGA)16 strike of writers. Both unions negotiated contract resolutions with the Alliance of Motion Picture and Television Producers (AMPTP) to address the risks AI posed to workers.
SAG-AFTRA, WGA, generative AI, and digital replicas
AI directly threatens to reduce or replace work completed by writers and actors alike. Generative AI, such as ChatGPT, lets anyone generate human-sounding scripts, screenplays, and articles from natural language prompts.17 Generative AI also enables digital replicas: computer-generated models of actors that can be visually inserted into scenes to give performances the actors were never present for.18 Both uses not only threaten to displace work performed by skilled writers and actors but also draw on actors’ and writers’ preexisting work to produce their output. This means a writer’s own scripts or an actor’s own performances can be used to create new digital ones—and potentially replace the writer or actor. Both SAG-AFTRA and the WGA reached agreements with the AMPTP that for now alleviate much of the threat that workers’ visual likenesses or written work could be used without compensation or in place of work completed by union members. The SAG-AFTRA agreement includes provisions regarding consent and compensation for the use of AI-powered digital replicas.19 The WGA contract places limits on how studios can use AI with human-written material, empowering writers to choose whether to use AI technology in completing their work and enabling writers, rather than the studio alone, to gain from the productivity benefits.20
Because unions democratically advocate for the best interests of their membership, they can also push for contract provisions tailored to workers’ unique circumstances, including workplace-specific concerns about the use of AI. In the film industry, credits earn workers residual payments when credited work is reused21 and help them land new jobs and command higher earnings;22 credits have been a subject of union negotiations since the early days of organizing in film.23 Writers therefore risk losing compensation they deserve when their work appears on screen uncredited, and AI raises that risk because it draws on publicly available human-created material, sometimes including copyrighted work, to produce its scripts.24 The WGA fought to prevent AI from using material created by WGA workers and ultimately reached a contract with the AMPTP that explicitly provides that AI-written material cannot be considered source material when determining writing credits.25
SAG-AFTRA
The contract SAG-AFTRA members ratified with the AMPTP in December 2023 covers television, streaming, and theatrical productions and has extensive language on AI and its use in producing digital replicas that look like real actors.
The SAG-AFTRA agreement defines an “Employment-Based Digital Replica” as a digital performance produced in “connection with employment on a motion picture” and “with the performer’s physical participation … for the purpose of portraying the performer in photography or sound track in which the performer did not actually perform.”26 In other words, producers can scan an actor’s performance and then use an AI-generated version of that actor to shoot a scene the actor was not present for. The contract, however, does not give producers carte blanche. The worker’s consent is required, along with a “reasonably specific description of the intended use” of the digital copy, and the worker is entitled to their “pro rata daily rate or the minimum rate, whichever is higher, for the number of production days that Producer determines the performer would have been required to work had the performer instead performed those scene(s) in person,” along with residuals. Together, these provisions ensure a digital likeness cannot be used for free, without the actor’s consent or compensation.
The agreement also covers digital copies of an actor created for productions the actor is not already working on. These “independently created digital replicas” create “the clear impression that the asset is a natural performer whose voice and/or likeness is recognizable as the voice and/or likeness of an identifiable natural performer.” They, too, require consent with a reasonably specific description of the intended use, as well as bargaining and compensation.
Finally, actors won compensation for, and asserted control over, the use of generative AI tools to create new performances from old ones. Producers must give “[n]otice to Union and an opportunity to bargain in good faith over appropriate consideration, if any, if a Synthetic Performer is used in place of a performer who would have been engaged under this Agreement in a human role”—making it harder for producers to simply use synthetic performers to avoid hiring real actors. Producers must also bargain with a performer before using AI to generate a synthetic performer that shares that performer’s distinctive facial features.
WGA
The contract agreed on by the WGA with the AMPTP in September 2023 amends the previous contract between writers and producers with an additional article, Article 72, that deals entirely with AI.27 The article makes clear that writing created by AI or generative AI cannot be “considered literary material” under WGA contracts. “Literary material” is the work product writers produce, so under the contract, the output of AI is not a substitute for the work of actual WGA writers.28 Section C of Article 72 goes further to cover situations where writers are hired to use AI-generated content “as the basis for writing literary material” and ensures that writers are still entitled to the payment, credit, and appropriate rights for their work that they would have enjoyed without AI involvement. The article further states that producers “may not require, as a condition of employment, that a writer use a GAI [generative AI] program which generates written material that would otherwise be ‘literary material’ … if written by a writer” and notes this would prevent, for example, a production company from forcing a writer to use ChatGPT to complete their work.
The contract reflects the “uncertain and rapidly developing” legal landscape around AI, and under it, “Each Company agrees to meet with the Guild during the term of this Agreement at least semi-annually at the request of the Guild and subject to appropriate confidentiality agreements to discuss and review information related to the Company’s use and intended use of GAI [generative AI] in motion picture development and production,” giving writers a continuing voice in how AI use develops. By establishing strong baseline standards that meet workers’ current needs and lay the groundwork for future negotiations, the contract offers an example of how workers can protect their interests when negotiating over AI.
Unions bargain over surveillance
AI-powered tools need vast datasets to function. Employers can harvest these data through invasive surveillance of workers, while AI companies may simultaneously gather even more information by processing large amounts of data automatically. If an employer plans to implement an AI-powered tool in its workplace, it may need data unique to its particular warehouses, trucks, or employee laptops. Employers may gather those data by tracking workers via means such as cameras and microphones,29 now powered by AI,30 as well as by monitoring work laptops issued to employees.31
The problem of employee surveillance is nothing new, and unions have been addressing workers’ concerns about it for decades, since surveillance is not only invasive of workers’ privacy and a detriment to job quality but also a core part of the union-busting toolkit.32 The NLRB ruled as early as 1997 that the use of hidden surveillance cameras is a mandatory subject of bargaining between employers and unions.33 The Communications Workers of America (CWA) has long negotiated over the use of surveillance in call centers, as many workers feared surveillance could be used against them to initiate disciplinary proceedings. Over the past 30 years, CWA contracts with AT&T, Verizon, Lumen/CenturyLink, and other telecommunications companies have placed limits on how many calls can be monitored, prevented recorded calls from being used to punish employees, and ensured call monitoring is used only to offer feedback for workers and to help train them.34
Today, surveillance continues to harm workers and poses a renewed threat to workers’ ability to come together in unions.35 Efficient AI-enabled processing of bulk data from sensors such as cameras allows more widespread use of surveillance in the workplace, reducing workers’ privacy and harming job quality.36 For example, workers have alleged that Amazon uses its data to calculate abstract performance metrics37 for its warehouse workers that pressure employees to overwork themselves and contribute to high rates of injury, stress, and unpaid leave,38 creating what one group of researchers called a “corporate police state.”39 In response, Amazon said it has made progress in improving warehouse safety, does not use camera technology in its warehouses to monitor employees, and offers workers multiple methods for reporting safety concerns.40 Workers have also alleged that Amazon monitors employee communications for labor organizing efforts,41 and journalists have detailed Amazon’s surveillance of private Facebook groups used by workers.42 The NLRB ordered the unsuccessful 2021 union vote at an Amazon warehouse in Bessemer, Alabama, to be rerun after a regional director of the NLRB found that Amazon “created the impression of surveillance” during the mail voting process.43
At the same time, Amazon workers who advocate for unions see them as a means of addressing workplace surveillance, especially the use of surveillance and automated systems in disciplining employees. Chris Smalls, a former Amazon warehouse employee and the organizer who led a successful unionization vote at Amazon’s Staten Island warehouse in 2022, cited an algorithm-driven employee monitoring system as “one of the big reasons people want to unionize” in 2021, asking, “Who wants to be surveilled all day? It’s not prison. It’s work.”44 Joshua Brewer, who helped organize the unsuccessful 2021 union vote at the Bessemer warehouse, similarly pointed to automated discipline when asked why many workers want a union: “The workers answer to a lot of robotic information systems that deliver their discipline, and they have no say in it.”45
The Guardian reports that call surveillance remains common at call centers,46 which unions have argued worsens job quality.47 Other industries are experimenting with new forms of surveillance. Hospitals have tracked nurses via a range of technologies, including sensor badges that monitor when they wash their hands,48 and Uber has long tracked drivers via their smartphones, using the data to develop driver-performance prediction algorithms in the name of increasing safety.49 The use of surveillance to curtail organizing is such a risk for workers that the NLRB general counsel issued a memo in 2022 urging the board to recognize the threat that surveillance poses to worker organizing.50
The harms posed by surveillance can be mitigated through collective bargaining. In 2022, UPS began to refit its trucks with surveillance cameras that monitor drivers inside their cabs and can continually record and stream data.51 This surveillance increased pressure on workers and interfered with their ability to complete their work at a manageable pace, even as they sorted packages in extreme summer heat. Drivers voiced frustration that the company chose to install new surveillance features in their vehicles while refusing to install air conditioning, with one driver alleging “surveillance and discipline are used to make us work faster.”52
The agreement the Teamsters reached with UPS in 2023 after threatening a nationwide strike curtails the use of surveillance in trucks and prevents the potential replacement of workers with automated technology.53 Article 6, Section 6 of the contract covers tracking and surveillance technologies and their use in discipline. The protections are strong and guarantee that human managers are involved at every step of the process, ensuring, “No employee shall be disciplined based solely upon information received from GPS, telematics, or any successor system that similarly tracks or surveils an employee’s movements unless they engage in dishonesty.” A “driver’s failure to accurately recall what is reflected by the technology” explicitly does not by itself constitute dishonesty, and UPS “must confirm by direct observation or other corroborating evidence” behaviors that could result in firing or discipline, preventing tracking technologies from serving as the sole witness to alleged violations and giving workers a layer of human protection to ensure technology is used fairly in the disciplinary process. Workers also cannot be warned about infractions based on tracking systems without first having a “verbal counseling session” about the infraction. Surveillance cameras that record audio or video inside a vehicle cab are banned, and outward-facing cameras cannot be used for discipline. These provisions relieve the pressure on workers to overperform simply to avoid discipline based on tracking or surveillance technology without human involvement.
Article 6, Section 4 establishes a committee between UPS and the Teamsters to review planned technological changes, which are defined under expansive language to include “any meaningful change in equipment or materials which results in a meaningful change in the work, wages, hours, or working conditions of any classification of employees in the bargaining unit or diminishes the number of workers in any classification of employees in the bargaining unit.” Under the agreement, UPS also agreed to notify the union well in advance of plans to implement any change that falls under this definition and to strike agreements with the union about the change being introduced. Furthermore, “If a technological change creates new work that replaces, enhances or modifies bargaining unit work, bargaining unit employees will perform that new or modified work,” ensuring workplace changes through technological advancement do not cut Teamsters out of the workplace altogether.54
Unions can help when workers have problems with AI management
For decades, employers have experimented with tools that offload management decisions such as scheduling and performance evaluation to automated systems. More recently, AI has enabled powerful automated management systems that sometimes conflict with workers’ ability to manage themselves or that make decisions workers do not understand or find unfair. Collective bargaining has enabled many workers to negotiate solutions that introduce a human element when needed and preserve the autonomy of individual workers on the job.
AI management
AI is increasingly being used to assign tasks, schedule workers, and evaluate performance because it offers a way to monitor performance metrics and staffing needs in real time and assign work accordingly.55 Past uses of statistical and computational algorithms, however, have shown that optimizing for narrow productivity targets can harm workers.56
AI management can strip workers of the power to manage their workload themselves. Housekeepers at a hotel that implemented algorithmic management of room-cleaning assignments described how the app-based system prevented them from efficiently managing their own workflows, denied them information that would have made deciding how to complete room-cleaning work easier, and increased their workload, all of which can negatively affect worker well-being.57 Academic research so far has found that algorithmic management of platform workers—that is, workers who find and perform jobs through digital platforms and are managed by smart technologies—can increase pressure on workers while making their income, workload, and scheduling less predictable and harder to manage.58 Higher stress and lower worker well-being can in turn reduce productivity, undermining the original goal of increasing productivity through algorithmic management.59
While the machine learning technologies now becoming mainstream may seem novel, the use of inscrutable algorithms for management, including hiring and firing, is not. Forty-four states and Washington, D.C., had implemented “value-added models” by 201560 to evaluate a teacher’s “value” to student academic achievement based on statistical analysis of students’ standardized test scores.61 The models are difficult for nonstatisticians to understand, meaning teachers often could not get clear explanations of why they failed the algorithm’s test. As the American Statistical Association warned in 2014,62 “high-level statistical expertise” was necessary to “develop the models and interpret their [the models’] results,” which mirrors concerns today that AI is too difficult to explain.63 Despite this, some school districts used these models to make decisions about firing teachers. The District of Columbia Public Schools’ (DCPS) teacher evaluation system, IMPACT, introduced in 2009, resulted in DCPS Chancellor Michelle Rhee firing hundreds of teachers64—and a 2021 DCPS review found IMPACT had “disparate outcomes between white teachers and teachers of color.”65 The Education Value-Added Assessment System, implemented by the Houston Independent School District starting in 2007, resulted in 221 teachers’ contracts not being renewed by the district in 2011.66 Seven teachers and their union, the Houston Federation of Teachers, sued the school district, and in 2017, the district agreed to stop using the system’s scores to terminate teachers unless the teachers could independently test or challenge the score.67
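To see why such models resist plain-language explanation, consider a minimal sketch of a value-added-style estimate. This is an illustration under simplifying assumptions, not the model any district actually used: it assumes value added can be captured by a per-teacher random intercept in a mixed-effects regression, and the data and variable names (score, prior_score, teacher_id) are hypothetical.

```python
# Minimal sketch of a "value-added"-style teacher estimate (hypothetical data).
# Each student's current test score is modeled from their prior score, plus a
# per-teacher random intercept that serves as the teacher's estimated "value added."
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_teachers, students_per_teacher = 20, 25
teacher_id = np.repeat(np.arange(n_teachers), students_per_teacher)
true_effect = rng.normal(0, 3, n_teachers)          # unobserved teacher effect
prior_score = rng.normal(70, 10, len(teacher_id))   # last year's test score
score = (10 + 0.9 * prior_score + true_effect[teacher_id]
         + rng.normal(0, 8, len(teacher_id)))       # classroom noise

df = pd.DataFrame({"score": score, "prior_score": prior_score,
                   "teacher_id": teacher_id})

# Random-intercept mixed model: the fitted intercept for each teacher is that
# teacher's estimated "value added" relative to the average teacher.
model = smf.mixedlm("score ~ prior_score", df, groups=df["teacher_id"])
result = model.fit()
value_added = {t: effects["Group"] for t, effects in result.random_effects.items()}
print(sorted(value_added.items(), key=lambda kv: kv[1])[:3])  # lowest-rated teachers
```

Even in this toy version, a teacher’s estimate depends on fitted variance components and is shrunk toward the mean using data from every other teacher’s students, so a low score cannot be traced to any single observable fact about that teacher’s classroom, which is precisely the explainability problem the American Statistical Association flagged.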
Driver deactivation
Today, many drivers for ride-sharing apps such as Uber and Lyft have suffered algorithm-driven “deactivations,” whereby they can no longer accept new rides through the apps they rely on for income.68 A 2023 survey of 810 Uber and Lyft drivers in California found that 66 percent had been deactivated at some point by one or both companies, and 30 percent of deactivated drivers reported being given no explanation by either Uber or Lyft.69 The automated and opaque system powering deactivations means workers are terminated without an easy-to-understand explanation.70
To combat this, ride-sharing drivers in Seattle organized Drivers Union, an association that helps them advocate collectively for protections at the state, local, and industry levels.71 Because American labor law requires organizing workplace by workplace and does not cover independent contractors, as app-based drivers are typically classified, many platform workers cannot organize via a traditional NLRB-certified election. Instead, drivers pushed for better policies from their state and local lawmakers.72
In 2021, Seattle implemented the Transport Network Company Driver Deactivation Rights Ordinance,73 which allowed drivers to appeal their deactivation to a government-run workers board, with representation from the Driver Resource Center, which is state regulated and operated by Drivers Union.74 The center has been highly effective at helping drivers keep their jobs: in a study of more than 1,400 cases, drivers represented by the Driver Resource Center got their deactivations overturned 80 percent of the time.75
Drivers also advocated for a state law, in effect across Washington state since 2023,76 that guarantees minimum pay, paid sick time, and workers’ compensation and extends deactivation protections statewide.77 Drivers Union used this law as a foundation to reach a termination agreement with Uber in 2023,78 under which drivers who lose access for three or more days can file an appeal through the Driver Resource Center and, if no resolution is reached within 30 days, require the company to show just cause for the termination.79
Workers can act as partners in introducing AI to the workplace
While certain uses of AI technology can directly harm workers, the technology also promises productivity benefits, and unions empower workers to make sure they share in those benefits. Although many workers are anxious about the introduction of AI into the workplace, some are hopeful that the technology can automate many of the repetitive, taxing, or undesirable tasks of their jobs and ensure work time is better spent.80 A 2023 survey by the Organization for Economic Cooperation and Development (OECD) found that a majority of manufacturing and financial services workers say AI has had a positive impact on their performance and mental health.81
Nevertheless, there is a risk that productivity benefits accrue only to employers, particularly if they find they need fewer workers to complete tasks and start laying off employees or “deskilling” jobs, lowering the skill level needed for existing roles in order to pay less.82 The same 2023 OECD study found that more than 40 percent of workers in manufacturing and financial services expect AI to lower their wages within a decade.83
Last year, several major unions and labor organizations won a seat at the table for workers in determining the future of the development and use of AI in their workplaces. In December 2023, the AFL-CIO and Microsoft created a platform for worker input into AI design and a dialogue over public policy to set guardrails for AI deployment.84 The CWA, which represents workers in industries with AI exposure such as game development and AI development itself, has developed bargaining principles for making sure AI works for its members.85 Using these principles and building on the union’s neutrality agreement with Microsoft, CWA workers at Microsoft video game subsidiary ZeniMax Media reached a tentative agreement86 that commits ZeniMax to provide notice about AI implementation while ensuring that its use of these tools boosts productivity and satisfaction without harming workers.87 Similarly, workers at the Financial Times editorial branch FT Specialist unionized with WGA East and ratified a contract that requires FT Specialist to “discuss in advance the introduction of any new technology” and allows the union to bargain over the effects of these changes.88
How unions negotiate over new technology
Unions have long negotiated the introduction of new technologies into the workplace in ways that benefit workers. As one recent example, in the wake of the COVID-19 pandemic, as many as 35 percent of workers whose jobs can be done remotely were working from home full time in 2023.89 Working from home became preferable for many workers, who enjoyed saving time and money on commuting and maintaining a better work-life balance, though it remained uncertain whether employers and workers would prefer remote work in the long term.
As a result, many unions pushed for greater flexibility in their contracts. Workers at a range of companies won reductions in the number of days per week or month they needed to report to the office in person, new ways to request time to work from home, and increased availability of fully remote options. Nearly 700 technology workers at The New York Times went on strike in October 2023 over the publication’s return-to-office policy.90 At the federal Government Accountability Office, 2,500 unionized workers won an agreement for a flexible remote-work policy in September 2023,91 as did a union representing 7,500 Environmental Protection Agency employees in 2021,92 even as many other federal workers were being brought back into the office. Unionized university staff at schools including Harvard93 and within the City University of New York system94 solidified and extended remote-work policies as well. The CWA reached agreements in 2022 with AT&T95 and Verizon96 that established terms for employees working from home, including stipends for remote-work costs, pay protection during system outages, and limits on the use of webcams for surveillance.
A similar process is taking place as AI technologies are introduced, although uncertainty over their exact uses remains. The CWA and ZeniMax reached a tentative agreement over the use of AI.97 ZeniMax agreed to limit itself to “uses of AI that augment human ingenuity and capacities, to ensure that these tools enhance worker productivity, growth, and satisfaction without causing workers harm.” While a final contract has not yet been reached, the language was developed as part of a “Proactive Bargaining” strategy by the CWA to address the effects of AI on its members rather than allowing employers to take the first step and put unions on the back foot.98 Dylan Burton, a QA tester at ZeniMax, hailed the agreement: “This agreement empowers us to shape the ways we may choose to use AI in our work and also gives us the means to address those impacts before their potential implementation.”99
Conclusion
Technological change affecting working conditions is nothing new, and as employers bring new AI technologies into the workplace, unions give workers a powerful tool for ensuring AI benefits them rather than worsening their jobs or leaving them without a job. Many researchers100 and labor unions are highlighting the need for unions to play a key role101 in negotiating the development and introduction of new technology into the workplace. Unfortunately, federal labor law makes joining a union far harder than it needs to be, denying many workers a voice on the job. Policymakers should therefore take a comprehensive approach to AI that empowers workers to speak up on the job while ensuring they enjoy the benefits of AI technology in the workplace. Empowering workers to join unions via policies that strengthen the right to organize, such as the Protecting the Right to Organize Act, will enable more workers to use collective bargaining to advocate for themselves over the use of new technologies in the workplace.