
How the Use of AI Impacts Marginalized Populations in Child Welfare

As Artificial Intelligence (AI) becomes more embedded in everyday life, questions are arising about how it will affect different aspects of society. NC State recently secured a $500k grant to launch an AI center in the College of Humanities and Social Sciences. The center is an intercollege, interdisciplinary project led by humanities scholars to “explore the ethical implications of the technology” (Garbarine, 2024). Referred to as Embedding AI in Society Ethically (EASE), the center “aims to facilitate interdisciplinary research” and to “promote public and civic engagement by creating humanities-centric collaborations with research institutions and technology companies in North Carolina’s Research Triangle area” (Garbarine, 2024).

The creation of EASE raises an essential question: How can we, as a society, integrate and use AI ethically? Ethical concerns have accompanied AI since its inception, ranging from fears about its lack of humanity and compassion to worries that human jobs and tasks will become obsolete. “The EASE Center proposes to establish a platform for the necessary integration of humanities perspectives in these conversations and be the powerhouse behind reflection and development of ethical AI” (Garbarine, 2024).

At the Center for Family and Community Engagement, our interests are tied more specifically to how the use of AI impacts marginalized populations in child welfare. According to Cachat-Rosset and Klarsfeld (2023), “The risk of discrimination and unfair treatment in AI mainly stems from two major causes.” The first is biased AI learning databases, which “tend to reproduce and maintain initially discriminatory algorithms”; the second is the “unconscious biases and stereotypes of AI designers, developers, and trainers who project their own representation of reality or society into their work,” which can produce discriminatory behavior in the AI systems they develop (Cachat-Rosset & Klarsfeld, 2023). These risks are important to examine and understand so that human oversight can help prevent such discrimination. The problems embedded within AI are difficult to address and have the potential to entrench and worsen discriminatory practices, so continued examination and monitoring of AI’s impact on vulnerable communities is warranted.

In a recent article from Berkeley Law, “How Artificial Intelligence Impacts Marginalized Communities,” Ajanaku (2024) shares an example of this bias by drawing on an article from Wired by Khari Johnson: “algorithms used to screen apartment renters and mortgage applications disproportionately disadvantaged Black people due to historical patterns of segregation that have poisoned the data on which many algorithms are built.” Ajanaku (2024) also cites the Federal Trade Commission, which wrote that although technological advancements like AI are meant to benefit all patients, “they have worsened healthcare disparities for people of color.” And this is just the tip of the iceberg: there are countless examples of how AI can encode discrimination and bias in ways that systemically harm vulnerable populations.

These examples shine a light on the harm and bias that may continue to arise from the use of AI. In child welfare, AI and Machine Learning (ML) are also being integrated into everyday procedures. According to an article from Unisys, AI and ML are being used to “[enhance] efficiency and decision-making capabilities by facilitating automated data entry, intelligent document analysis, predictive analytics, and workflow optimization” in child welfare (Govindiah, 2023). Although the article calls for ethical considerations, it is clear that not enough is being done on the front end to prevent problems before they occur. AI and ML in child welfare settings still have the potential to cause immense harm when bias creeps into decision-making processes, when data privacy is compromised, and in ways we have yet to anticipate.

However, despite the risks associated with AI use in child welfare, there is also enormous potential for good. Using AI to manage administrative tasks and streamline processes could reduce social workers’ workload and mitigate burnout: a recent study found that approximately 73% of social worker respondents reported elevated levels of emotional exhaustion (Ratcliff, 2024). AI that increases efficiency and automates data entry could therefore be a significant help to social workers.

This is a complex and multifaceted problem facing our society today. Institutes like TRAILS (Trustworthy AI in Law & Society) and centers like EASE at NC State are essential to ensuring that a humanities approach is part of the conversation and that AI is introduced to different fields in ethical, practical, and considerate ways. Moving forward, we need to consider how AI use in child welfare (and other realms) is monitored, with human oversight, to prevent further bias and discrimination.