Artificial intelligence (AI) may seem like an unlikely partner for the human-centric profession of social work. After all, social work is about communities, relationships, equity, and advocacy. But that’s exactly why, according to Dean and Distinguished Professor George Leibowitz, social workers have an important role to play in this fast-moving field.
“We can harness data science, AI algorithms, and predictive models for social good,” Leibowitz says. “Social workers need to be part of the conversation, so new technologies are informed by appropriate methodologies and theory and are guided by a deep knowledge of the communities we serve.”
Ethical AI
So far, social work has had what Leibowitz calls a “tenuous relationship” with AI. On one hand, the profession recognizes the immense potential of machine learning, predictive modeling, and data science to support social well-being. On the other, there’s legitimate concern about bias, ethics, academic integrity, and the risk of losing the human connection so central to the field. “There’s a fear that AI will replace humans or eliminate the relationships we have with our clients and communities,” says Leibowitz. “But human-centric AI—AI that requires human input to be meaningful and interpretable—actually depends on us.”

Indeed, for more than a decade, social workers nationwide have been collaborating with engineers, computer scientists, and biomedical researchers to build data-driven tools that address pressing social issues and support clinicians in the field. These include chatbots that manage clinical intake, algorithms that develop treatment plans, wearable sensors that detect falls among older adults, and predictive models that assess risk for depression or opioid dependence.
At Rutgers, AI is beginning to play a larger role in social work education and practice. A workforce development grant from the Health Resources and Services Administration is currently funding AI-powered telehealth training to help social workers and healthcare providers work together to reach patients and solve problems. More broadly, partnerships with the Rutgers Artificial Intelligence and Data Science Collaboratory (RAD) and with outside institutions like New York University’s Constance and Martin Silver Center on Data Science and Social Equity are helping shape local and national conversations around AI and community care.
At the same time, Rutgers’ social work researchers are collaborating with clinicians and engineers to co-develop transparent, trustworthy machine learning models (see below for more details on these projects). “AI and machine learning models are only as good as the input,” explains Associate Dean for Research and Distinguished Professor Lia Nower. “They’re not all-knowing programs. Using AI in an ethical way requires subject matter experts who know the kind of information that needs to be used and generated.”
This approach reflects what Leibowitz calls “community-driven informatics” or “human-centered AI,” where communities and advocates—not just coders—inform the development of AI tools. “Chatbots and other technology must be fed and trained to work well with input from human beings,” he adds. “When it comes to any type of AI that affects social work and its practice, our role is to interpret the evidence and algorithms and make good shared clinical decisions in concert with medical care communities. None of that goes away with AI and, in fact, it’s where social workers can be really helpful.” To that end, the school has partnered with Stony Brook University to build innovative, stakeholder-driven machine learning models to predict a number of substance use outcomes. At Rutgers, the school has proposed a new interdisciplinary postdoctoral fellowship that combines social work, education, psychology, and data science to train future leaders in ethical AI for behavioral health.
Teaching the Next Generation
When it comes to AI’s value in the classroom, faculty reactions are mixed. Some worry that generative AI (like ChatGPT) encourages academic dishonesty. Others see it as a powerful tool. The truth is somewhere in between.
Assistant Professor Woojin Jung has integrated coding basics and AI lessons into her advanced statistics classes, where students learn how to use data in their research. But she’s also noticed an uptick in students copying and pasting directly from ChatGPT in their assignments and on online discussion boards. “In a typical class, one or two students out of 25 will use AI unethically, meaning they submit an assignment that’s not their own writing, is plagiarized, or uses illegitimate sources,” she says. “To combat this, I share these concerns and lay out AI guidelines in the syllabus. We have to be proactive about that.”
Jung’s point of view is substantiated by recent reports that encourage educators to integrate AI into syllabi and rethink grading standards to reflect the reality of these new tools. And if you ask Leibowitz, students should learn not only how to prompt a chatbot, but how to interrogate the source of its information, assess its validity, and use it to inform—not replace—critical thinking. “AI is here,” says Leibowitz. “We have to train students to use it ethically, to understand its biases, and to enhance their work.”
The School of Social Work is currently leveraging AI via simulation exercises, which prepare students for integrated care settings by training them to address behavioral health problems alongside physicians and other healthcare providers. “Students have the opportunity to practice in simulated environments so they can hit the ground running in a hospital, for example, where a team draws on the expertise of various disciplines,” Leibowitz says. “Simulation enables a quick transfer of knowledge that you just can’t get from reading a textbook. It’s experiential. Our goal is to teach students to treat real patients, and simulation helps us do that.”
No Need to Fear
At a time when headlines often paint AI as an existential threat to human professions, Leibowitz offers a different perspective: one of cautious optimism. “AI will not replace the human element in social work; actually, I believe it can enhance and improve practice,” he says, pointing to applications that deliver interventions to communities and reduce time spent on data collection.
While AI certainly still has its share of limitations, like biased facial recognition software and misinformed health chatbots, these obstacles are precisely why social workers need to engage. “Bias in data, cultural irrelevance, lack of transparency—these are problems social workers are trained to spot,” Leibowitz says. “Our values compel us to ask the hard questions and make sure the tools we use reflect the diversity, dignity, and worth of every person. If we align AI with our mission, and we train our students to do the same, we don’t have to be afraid. We’ll be better for it.”
At the School of Social Work, researchers are tapping into the power of AI to drive social change.
Flagging Problem iGaming Early
At the School of Social Work’s Center for Gambling Studies, Director Lia Nower is developing a machine learning algorithm to identify individuals who are gambling online at dangerous levels. “We know there’s a cohort of individuals who are overspending, but there’s currently no requirement that gambling platforms try to identify or help these people,” Nower says. “Problem gambling impacts families and communities because it can lead to unemployment, bankruptcy, crime, homelessness, suicide, and countless other adverse consequences. Gambling not only impacts the individual, but also those around the individual.”
Nower and her team identified a range of variables linked to escalating gambling behavior, such as placing larger bets, gambling more frequently, or spending a growing share of one’s income on gambling. “Machine learning can trace these patterns to find a tipping point that places individuals into a risk category,” she explains. “This is virtually impossible to do with human calculations alone.” Nower’s model is currently being used to inform regulators of patterns and trends, with the goal of encouraging them to require online betting platforms to adopt similar safeguards.
In practice, as risky gambling patterns escalate and persist, the gaming system would flag the individual so they can be offered appropriate resources like educational materials or, in more serious cases, a mandatory cooling-off period (a strategy already used in parts of Europe). “It’s really up to the regulators at this point,” Nower says. “Our goal is to give them the tools they need to make informed decisions.”
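In spirit, this kind of flagging is a standard supervised learning pipeline: behavioral features go in, a risk probability comes out, and a threshold determines who gets offered help. The Python sketch below illustrates the idea with scikit-learn on synthetic data; the feature names, labels, and 0.7 threshold are assumptions for illustration only, not details of the Center’s actual model.

```python
# Illustrative sketch only: the features, synthetic data, and threshold are
# assumptions, not details of the Center for Gambling Studies' actual model.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical per-player behavioral features of the kind described above.
X = np.column_stack([
    rng.gamma(2.0, 25.0, n),   # avg_bet_size: average wager in dollars
    rng.poisson(8, n),         # sessions_per_week: gambling frequency
    rng.beta(2, 8, n),         # income_share: fraction of income wagered
    rng.normal(0.0, 1.0, n),   # bet_trend: slope of recent bet sizes
])

# Synthetic "ground truth" labels: escalating spenders are higher risk.
logit = 0.02 * X[:, 0] + 0.15 * X[:, 1] + 4.0 * X[:, 2] + 0.8 * X[:, 3] - 4.0
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)
risk = model.predict_proba(X_test)[:, 1]
print("AUC:", round(roc_auc_score(y_test, risk), 3))

# The "tipping point": flag players whose predicted risk crosses a threshold
# so the platform can offer resources or, in more serious cases, a
# cooling-off period.
RISK_THRESHOLD = 0.7  # illustrative; regulators would set this in practice
flagged = risk >= RISK_THRESHOLD
print(f"Flagged {flagged.sum()} of {len(risk)} players for intervention.")
```

In a deployed system, the threshold and the follow-up actions would be set by regulators and clinicians rather than by the model alone, which is exactly the division of labor Nower describes.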
Targeting Poverty with Precision and Accuracy
Aid organizations delivering resources and assistance to underserved communities require accurate data to identify where help is needed. While broad data about countries or regions is readily available, detailed information about specific communities is much harder to collect. Traditional methods like household surveys are time-consuming and limited in sample size; they may require travel to remote areas and often yield incomplete data, especially in rural regions or places lacking electricity or infrastructure.
But thanks to AI, Assistant Professor Woojin Jung can pinpoint areas of need with remarkable accuracy, even in developing regions.
Jung trained an AI model to analyze satellite imagery and identify areas showing signs of poverty that may require aid. The model analyzes features correlated with development, such as buildings (including homes, schools, and grocery stores), pixel intensity, bodies of water, and road conditions (paved or unpaved). Vibrant colors (versus dull, uniform areas) and nighttime light can also signal the presence of electricity and development. Jung then correlated these features with household survey data on assets and income. “Once the model learns the correlations, it can predict wealth or poverty without the household survey,” she says. “That’s the key part, since the sample sizes for those surveys are limited and not all areas are surveyed, for example, less populated areas. Once we have this granular level of socioeconomic conditions, agencies can reach out to vulnerable populations with high confidence and more accuracy.”
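At a high level, this is supervised regression: the model learns a mapping from imagery-derived features to a survey-based wealth index, then applies that mapping where no survey exists. The Python sketch below illustrates the pattern with scikit-learn; the features and synthetic data are stand-ins for the signals Jung describes, not her actual pipeline.

```python
# Illustrative sketch only: the features and synthetic data are stand-ins
# for the satellite-derived signals described above, not Jung's pipeline.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)

def satellite_features(n_cells):
    """Hypothetical features extracted from imagery for n_cells map cells."""
    return np.column_stack([
        rng.poisson(30, n_cells),      # building_count: homes, schools, stores
        rng.random(n_cells),           # paved_road_fraction
        rng.gamma(2.0, 1.5, n_cells),  # night_light: a proxy for electricity
        rng.random(n_cells),           # color_variation: vibrant vs. dull
    ])

# Step 1: learn the correlation between imagery features and a wealth index
# derived from household surveys (synthetic here) in surveyed areas.
X_surveyed = satellite_features(800)
wealth_index = (0.02 * X_surveyed[:, 0] + 0.5 * X_surveyed[:, 1]
                + 0.3 * X_surveyed[:, 2] + rng.normal(0.0, 0.2, 800))
model = RandomForestRegressor(random_state=0).fit(X_surveyed, wealth_index)

# Step 2: predict socioeconomic conditions where no survey was conducted,
# such as remote or less-populated areas.
X_unsurveyed = satellite_features(200)
predicted = model.predict(X_unsurveyed)
priority = np.argsort(predicted)[:10]  # the ten poorest predicted cells
print("Map cells to prioritize for aid:", priority)
```

The prediction step is the payoff: once trained on surveyed areas, the model can estimate conditions for unsurveyed cells, giving agencies the granular picture of need described above.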