Authors’ note: For this report, the authors use the definition of artificial intelligence (AI) from the 2020 National Defense Authorization Act, which established the National Artificial Intelligence Initiative.1 This definition was also used by the 2023 “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.”2 Similarly, this report makes repeated reference to “Appendix I: Purposes for Which AI is Presumed to be Safety-Impacting and Rights-Impacting” of the 2024 OMB M-24-10 memo, “Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence.”3
The U.S. Department of Housing and Urban Development (HUD) and other housing regulators should consider using existing statutory authorities in the Fair Housing Act and the Dodd-Frank Wall Street Reform and Consumer Protection Act to address potential artificial intelligence (AI) risks to housing fairness, including discrimination. While Governing for Impact (GFI) and the Center for American Progress (CAP) have extensively researched these existing authorities in consultation with numerous subject matter experts, the goal is to provoke a generative discussion about the following proposals rather than outline a definitive executive action agenda. Each potential recommendation will require further vetting before agencies act. Even if additional AI legislation is needed, this menu of potential recommendations demonstrates that agencies have more options to explore beyond their current work and should immediately utilize existing authorities to address AI.
While the use of AI in housing decisions is not the root cause of discrimination, it has amplified long-standing historical inequities and further obscured the metrics behind housing providers’ decisions. As Lisa Rice, president and CEO of the National Fair Housing Alliance, stated in a Senate AI insight forum:
These systems are still performing their originally-intended function: perpetuating disparate outcomes and generating tainted, bias-laden data that serve as the building blocks for automated systems like tenant screening selection, credit scoring, insurance underwriting, insurance rating, risk-based pricing, and digital marketing technologies. The ability of automated systems to scale can lead to, reinforce, or perpetuate discriminatory outcomes if they are not controlled.4
While the administration should address these underlying inequities directly, the 2023 AI executive order specifically tasks HUD with addressing, at a minimum, the harms caused by AI in housing,5 and many of the recommendations below build on this directive.
AI risks and opportunities
Access to housing is imperative to overall well-being, economic and social advancement, and safety. As the 2023 “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” highlights, AI has the potential to exacerbate unlawful discrimination in housing, including by automated or algorithmic tools used to make decisions about access to housing and in other real estate transactions.6
A note on terminology
In this section, the authors define “AI” expansively to refer not just to technologies incorporating recent advances in machine learning but also to algorithmic and automated decision-making technologies that have enabled discrimination in the housing context for many years, especially given the widespread use of tenant screening tools.7
Specifically, the 2023 executive order on AI requires the Department of Housing and Urban Development and the Consumer Financial Protection Bureau (CFPB) to issue guidance targeting tenant screening systems and detailing how the Fair Housing Act (FHA), Consumer Financial Protection Act, and Equal Credit Opportunity Act (ECOA) apply to the discriminatory effects of AI in housing advertising, credit, or other real estate transactions8—ECOA proposals are included in Chapter 5 of this report. Furthermore, the Office of Management and Budget (OMB) M-24-10 memorandum on “Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence” identifies minimum risk management practices that must be applied when using AI for rights-impacting purposes, including: “Screening tenants; monitoring tenants in the context of public housing; providing valuations for homes; underwriting mortgages; or determining access to or terms of home insurance.”9 The AI section on HUD’s own website identifies “potential risks associated with AI systems, such as fairness, bias, privacy concerns, and security vulnerabilities.”10
From the authors’ perspective, several kinds of AI applications could further entrench harms to consumers and tenants, including:
- Surveillance in public housing: Facial recognition and other biometric data are being used in public housing to evict residents for minor infractions as part of a broader surveillance network, another example of automated and machine learning technologies enabling and exacerbating long-standing harmful activities.11 Types of facial recognition are also identified in the OMB M-24-10 AI memo as presumed rights-impacting uses of AI.12 Reports indicate that HUD grant money has been used to install surveillance cameras, some of which are equipped with AI technology.13 In the wake of these reports, HUD has said that it will not fund future grants that use facial recognition.14
- Tenant screening: AI and adjacent tools—including automated processes that compound existing discriminatory assessments—can be used in public and private housing screening contexts, further perpetuating discrimination. Automated tenant screening programs often conceal factors that result in a negative recommendation, sometimes including years-old eviction notices.15 “Screening tenants” is cited in the OMB M-24-10 AI memo as an AI use by federal agencies that is presumed to be rights-impacting.16
- Allocation of subsidized housing: Relatedly, AI tools may be used by federal and state agencies to determine the allocation of subsidized housing or other federal programs. A 2019 report that analyzed common screening and prioritization programs used by federal housing agencies, including the Vulnerability Index – Service Prioritization Decision Assistance Tool (VI-SPDAT), found that communities of color received lower prioritization scores than their white counterparts and that individual white applicants were more likely to be prioritized for permanent supportive housing than people of color.17
- Appraisal: As HUD has recognized in recent proposed rulemaking, home appraisal programs have negatively affected marginalized communities: “While AVMs [automated valuation models] have the potential, if properly used, to reduce human bias and improve consistency in decision-making, they are not immune from the risk of discrimination. For example, the models may rely upon biased data that could replicate past discrimination or even data that could include protected characteristics, such as race, or very close proxies for them. Moreover, if an algorithm were to generate discriminatory results, the harm could be widespread because of an AVM’s scale.”18
Undervalued homes and entire neighborhoods can help fuel generational wealth gaps.19 A report from the Terner Center for Housing Innovation at the University of California, Berkeley, found that home valuations lower than the contract price are more common for households of color and significantly diminish a homeowner’s overall wealth.20
- Online advertising: Online advertising is a critical component of many industries today, including housing. In 2019, Facebook (now Meta) settled with various civil rights groups and private parties over allegations of discriminatory ad targeting practices.21 Later in 2019, HUD charged Facebook with violating the Fair Housing Act, a charge that resulted in a settlement in 2022.22 As HUD’s charge alleged, online platforms may perpetuate racial discrimination in access to housing opportunities by targeting ads in ways that exclude certain protected classes or other characteristics.23 AI and algorithms that target advertisements could cause similar harms and violations of the Fair Housing Act. As HUD noted in recent guidance, such discrimination can be deliberate or unintentional, but it is illegal either way.24
- Privacy: Online advertisements for housing and housing applications may be predicated on private or confidential information about potential consumers.25 Beyond data collection, housing providers and screening companies can misuse sensitive and personal information, especially in the context of eligibility determinations.26 This risk is amplified when AI-driven systems make automated decisions without transparency, leading to possible exclusion from housing opportunities based on obscure or inaccurate data.27 Aggregation and analysis of sensitive and personal data by AI can also result in profiling and discrimination, further exacerbating existing inequalities in the housing market.28
Current state
The Biden administration has already taken steps to address housing discrimination writ large, which can be exacerbated by the unregulated use of AI and adjacent tools. For example, the Action Plan to Advance Property Appraisal and Valuation Equity (PAVE) specifies several actions to bring AVMs into compliance with existing anti-discrimination laws.29 As a part of the PAVE plan, agencies are currently engaging in rulemaking under Section 1473(q) of the Dodd-Frank Act to address potential bias by including nondiscrimination quality control standards in the proposed rule.30 According to the PAVE plan, CFPB, the Department of Justice (DOJ), the Department of Veterans Affairs (VA), and HUD will issue guidance on the Fair Housing Act and ECOA’s application to the appraisal industry.31 HUD has already issued a letter informing all FHA participants that appraisals must comply with the Fair Housing Act.32
HUD also recently issued two guidance documents—one on tenant screening, “Guidance on Application of the Fair Housing Act to the Screening of Applicants for Rental Housing,” and one on advertising, “Guidance on Application of the Fair Housing Act to the Advertising of Housing, Credit, and Other Real Estate-Related Transactions through Digital Platforms”—that explain how the FHA protects certain rights when housing providers use AI technologies.33
The new HUD screening guidance outlines liability for housing providers and screening companies under the FHA, explaining how intentional and unintentional discrimination facilitated by AI technology may violate the act.34 Furthermore, the guidance highlights important considerations for both housing providers and tenant screening companies when using AI technologies, including choosing relevant screening criteria; ensuring the accuracy of records; providing transparency to applicants; allowing applicants to challenge negative information; and designing and testing models for FHA compliance.35 The guidance points specifically to credit history, eviction history, and criminal records as underlying information that is susceptible to recreating bias.36
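To make the final consideration concrete, the following is a minimal sketch of one way a screening company might test a model’s outcomes for disparate impact. The data are hypothetical, and the four-fifths benchmark is a rule of thumb borrowed from employment law, used here only as a rough screen rather than a legal standard under the FHA.

```python
# A minimal sketch of a disparate impact check on screening outcomes.
# All data are hypothetical, and the four-fifths (80 percent) benchmark
# is an employment law rule of thumb used here only as a rough screen,
# not a legal standard under the Fair Housing Act.
from collections import defaultdict

# Hypothetical screening results: (applicant_group, approved)
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals, approvals = defaultdict(int), defaultdict(int)
for group, approved in decisions:
    totals[group] += 1
    approvals[group] += approved

rates = {group: approvals[group] / totals[group] for group in totals}
benchmark = max(rates.values())  # highest group approval rate

for group, rate in sorted(rates.items()):
    ratio = rate / benchmark  # adverse impact ratio relative to the top group
    flag = "flag for review" if ratio < 0.8 else "within benchmark"
    print(f"{group}: approval rate {rate:.0%}, impact ratio {ratio:.2f} ({flag})")
```

A failing ratio is not itself proof of an FHA violation; it is a signal to investigate the underlying criteria and search for less discriminatory alternatives.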
The HUD guidance on advertising through digital platforms describes the responsibilities and liability of advertisers and ad platforms under the FHA.37 Specifically, the guidance illustrates several ways advertisers and ad platforms may violate the FHA, including by segmenting and selecting audiences based in part on protected characteristics or proxies, including via custom or mirror audience tools; limiting protected class groups’ access to housing-related ads; reverse redlining; and showing different content or pricing to different groups based on protected characteristics.38 The guidance recommends that advertisers use platforms that manage the risk of discriminatory delivery, follow ad platform instructions, carefully consider the source of audience datasets, and monitor the outcomes of advertising campaigns.39 It further recommends that ad platforms run housing-related ads in a separate process with a specialized interface designed to avoid discrimination in audience selection and delivery; avoid providing targeting options that directly or indirectly relate to protected characteristics; conduct regular testing; identify and adopt less discriminatory alternatives for AI models and algorithmic systems; ensure that algorithms are similarly predictive across protected groups; ensure that ad delivery systems do not result in differential pricing; and document information about ad targeting functions and internal auditing.40
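The recommendation that algorithms be “similarly predictive across protected groups” can also be checked empirically. The sketch below, using hypothetical scores and outcomes, compares how well a model’s relevance scores rank actual positive outcomes within each group; a large gap would suggest the model is not similarly predictive.

```python
# A minimal sketch of checking whether a model is “similarly predictive”
# across groups, as the HUD guidance recommends for ad platforms. All
# scores, outcomes, and group names are hypothetical; a real audit would
# use production data and statistical tests, not point estimates.

def pairwise_auc(scores, labels):
    """Probability that a random positive case outranks a random negative one."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    if not pos or not neg:
        return float("nan")
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical relevance scores and observed outcomes, split by group.
audit_data = {
    "group_a": ([0.9, 0.8, 0.4, 0.2], [1, 1, 0, 0]),
    "group_b": ([0.7, 0.5, 0.6, 0.3], [1, 1, 0, 0]),
}

for group, (scores, labels) in audit_data.items():
    print(f"{group}: within-group AUC {pairwise_auc(scores, labels):.2f}")
# A sizable AUC gap between groups indicates the model predicts less well
# for one group, pointing toward less discriminatory alternatives.
```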
HUD has also appointed a chief artificial intelligence officer (CAIO) in accordance with the taskings from the 2023 executive order on AI and the OMB M-24-10 AI memo.41
In 2013, the Obama administration implemented the discriminatory effect rule, formalizing HUD’s long-held interpretation that the FHA prohibits discriminatory effects regardless of intent to discriminate.42 In 2020, however, the Trump administration issued a revised rule, which purported to create defenses to disparate impact claims for entities relying on algorithms and other automated technologies.43 Of note, the 2020 rule allowed defendants to show that “predictive analysis accurately assessed risk” as a defense to a challenged policy.44 The 2020 rule never took effect and was rescinded by the Biden administration’s HUD, which in 2023 finalized a rule largely returning to the 2013 paradigm and eliminating the Trump rule’s defenses related to algorithmic technologies.45
Relevant statutory authorities
This section explains how some statutes currently enforced by housing regulators could apply to AI. As explained in the introduction to this report, this list is by no means exhaustive, and each potential proposal would benefit from additional research and vetting.
Fair Housing Act
The Fair Housing Act makes it unlawful for “any person or other entity whose business includes engaging in residential real estate-related transactions to discriminate against any person in making available such a transaction, or in the terms or conditions of such a transaction” based on any protected class under the statute.46 The statute specifically prohibits discrimination in advertising, appraisal, public housing, and tenant screening.47 The act gives HUD the authority to conduct formal adjudications of complaints and to promulgate rules to interpret and carry out the act.48 Under this authority, HUD recently promulgated a rule reinstating HUD’s discriminatory effect standard, which clarifies that discriminatory effect—facially neutral practices that cause unjustified discrimination—is sufficient to violate the act’s prohibition on discrimination.49
Regarding advertising, Section 804(c) of the Fair Housing Act, 42 U.S.C. 3604(c), as amended, states:
It shall be unlawful to make, print, or publish, or cause to be made, printed, or published, any notice, statement, or advertisement, with respect to the sale or rental of a dwelling, that indicates any preference, limitation, or discrimination because of race, color, religion, sex, handicap, familial status, or national origin, or an intention to make any such preference, limitation, or discrimination.50
HUD has implemented this provision through rulemaking,51 and subsequent rulemaking expanded the definition of prohibited discrimination to include discriminatory effect.52 As highlighted above, the 2023 discriminatory effect rule determined that the 2020 rule’s third-party and outcome prediction defenses—both of which would have made AI-based discrimination easier to defend—were unnecessary.53 HUD has since provided general guidelines for advertising and marketing and for investigation procedures.54
The FHA covers tenant screening that results in discrimination.55 For example, a 2016 guidance document stated that housing providers violate the FHA when the provider’s “policy or practice has an unjustified discriminatory effect, even when the provider had no intent to discriminate.”56 The guidance explains circumstances under which utilizing criminal records, which are inherently biased due to the criminal justice system’s disproportionate targeting of African American and Hispanic communities,57 may subject housing providers to liability under the FHA.58 Advocacy groups and the DOJ, in a statement of interest,59 have also argued that the same logic should apply to other screening factors, such as credit histories and rental and eviction records.60
The FHA also covers residential appraisal. The term “residential real estate-related transaction” is defined in the statute to include the “appraising of residential real property.”61 Courts have also relied on other provisions of the Fair Housing Act, including 42 U.S.C.A. § 3605, which prohibits real estate discrimination because of “race, color, religion, sex, handicap, familial status, or national origin,” to prohibit discrimination in the appraisal industry.62 This prohibition extends to real estate businesses that provide housing-related services in ways that “otherwise make unavailable or deny” housing.63 Courts have observed that “an appraisal sufficient to support a loan request is a necessary condition precedent to a lending institution making a home loan.”64 Moreover, HUD has updated its general appraiser requirements to include nondiscrimination principles65 and has begun rulemaking on automated valuation models—which is discussed below.66
Under 42 U.S.C.A. § 3608, HUD must administer public housing programs in a “manner [that] affirmatively [furthers] the purpose of [the FHA],”67 including its nondiscrimination provisions. HUD has promulgated several rules under this authority, known as the Affirmatively Furthering Fair Housing (AFFH) rules.68 The AFFH rules apply to all federally funded housing programs, which must not only abide by nondiscrimination principles but also “take meaningful actions to overcome patterns of segregation and foster inclusive communities.”69 In 2015, the Obama administration promulgated an AFFH rule under the FHA’s mandate for affirmatively advancing fair housing.70 In 2020, the Trump administration effectively eliminated the AFFH rule, leaving only a general statement of what constitutes a fair housing approach, with few policy requirements for local governments.71 In 2023, HUD proposed a new AFFH rule, largely restoring the 2015 rule and developing several key provisions, including requiring localities to develop equity plans, track their progress toward fair housing goals, and increase accountability through direct public complaints.72
Recommendation
Based on the aforementioned authorities, HUD could take the following action:
- Update the “Fair Housing Advertising” guidelines—a separate document from the newly released advertising guidance—to clarify how Section 804(c)’s prohibition against discrimination in the advertisement of housing opportunities applies to online advertising that relies on algorithmic tools or data, as required by the 2023 executive order on AI and consistent with the recent HUD guidance on advertising through digital platforms.73 Such an update would be consistent with the DOJ’s settlement with Facebook, which targeted similar practices,74 and could specifically highlight practices that steer housing advertisements away from protected communities.75 Furthermore, the guidelines should specify that companies providing advertising services that use AI technologies can be held liable under the FHA, mirroring the responsibilities and liabilities outlined in HUD’s recent guidance.76
Dodd-Frank Act
Section 1473(q) of the Dodd-Frank Act amended Title XI of the Financial Institutions Reform, Recovery, and Enforcement Act of 1989 (Title XI) to add a new section, 1125, requiring automated valuation models to adhere to certain quality control standards.77 Under this authority, the Federal Housing Finance Agency (FHFA) and other agencies have proposed a rule to improve the quality control standards governing AVMs.78 The proposed rule applies to AVMs used in making credit decisions or covered securitization determinations regarding a mortgage but does not mandate specific policies institutions must follow, nor does it cover nonbank entities.79 Key provisions in the rule require AVMs to “ensure a high level of confidence in the estimates produced; protect against the manipulation of data; seek to avoid conflicts of interest; and require random sample testing and reviews.”80
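As one illustration of how the statute’s “random sample testing and reviews” requirement might work in practice, the sketch below back-tests hypothetical AVM estimates against subsequent sale prices and compares error and undervaluation rates across neighborhood groups. The records, group names, sample size, and injected bias are assumptions for illustration, not the agencies’ prescribed methodology.

```python
# A minimal sketch of "random sample testing" for an AVM: back-testing
# estimates against later sale prices and comparing error rates across
# neighborhood groups. Records, group names, and the 500-record sample
# are illustrative assumptions; the downward bias injected for
# tract_group_b is synthetic, to show what a failing test looks like.
import random
import statistics

random.seed(0)

# Hypothetical AVM accuracy by group: (low, high) multipliers on sale price.
bias = {"tract_group_a": (0.95, 1.05), "tract_group_b": (0.85, 1.00)}

# Back-test records: (avm_estimate, sale_price, group)
records = [
    (random.uniform(*bias[group]) * price, price, group)
    for group in bias
    for price in random.choices(range(150_000, 600_000, 10_000), k=2_000)
]

sample = random.sample(records, k=500)  # the random sample under review

for group in bias:
    rows = [(est, price) for est, price, g in sample if g == group]
    errors = [(est - price) / price for est, price in rows]
    undervalued = sum(e < 0 for e in errors) / len(rows)
    print(
        f"{group}: n={len(rows)}, median error {statistics.median(errors):+.1%}, "
        f"undervalued {undervalued:.0%}"
    )
# A persistent gap in error or undervaluation rates between groups would
# trigger the review and remediation the quality control standards contemplate.
```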
Recommendations
Based on this authority, the FHFA should take the following actions:
- Continue the rulemaking process on the proposed AVM rule but extend its application to all mortgage lenders—specifically nonbanks, given that nonbanks made more than half of annual residential real estate loans in 2022.81 Furthermore, the rule should include specific minimum standards for each proposed goal, potentially incorporating the National Institute of Standards and Technology (NIST) AI guidelines82 or relevant minimum standards developed in response to the minimum risk management practices anticipated by the OMB M-24-10 AI memo.83
- Specify, through the proposed AVM rule or additional rulemaking, that companies using AVMs must disclose their use to customers and allow customers to request nonautomated appraisals or seek valuations from alternative AVMs. The FHFA can do so using its broad authority in Section 1125 to “account for any other such factor that the agencies … determine to be appropriate.”84 This would align with the statute’s purposes to “ensure a high level of confidence in [AVMs],” “protect against the manipulation of data,” and “seek to avoid conflicts of interest.”85
Fair Credit Reporting Act
While HUD does not administer the Fair Credit Reporting Act (FCRA), it can help the Consumer Financial Protection Bureau and the Federal Trade Commission communicate statutory and regulatory obligations to affected entities in the housing space. Especially if the FCRA’s primary regulators update regulations and guidance to account for novel AI development, as the authors recommend in Chapter 5,86 HUD can collaborate on guidance explaining entities’ obligations in the context of AI. For example, under some of the recommendations the authors propose in Chapter 5, credit reporting agencies, such as certain tenant screening firms,87 would need to disclose their use of AI technologies; periodically assess whether their machine learning or other automated technologies result in discriminatory outcomes or take into account information prohibited by statute; and provide for human review of reinvestigation requests, which, in practice, would require individual traceability and legibility.88 Furthermore, users of credit reports, including landlords and property managers, may eventually need to disclose information about the use of AI or related technologies in adverse decision notices.89
Conclusion
The Department of Housing and Urban Development and other housing regulators play a critical role in ensuring fairness in housing and contemplating how to address the potential challenges AI creates. HUD’s AI work directed by the AI executive order is a critical start, and further utilizing its existing authorities, as outlined in this chapter, is essential. GFI and CAP hope this chapter will inspire regulators, advocates, and policymakers interested in how the federal government could update regulatory regimes to account for this new AI moment as it affects housing.