Artificial Intelligence: How Social Work Input Shapes Social Impact


By Leigh Glenn

Synthetic-voice travel directions. Fraud detection. Social services initial intakes. Such automated functions integrate “artificial intelligence” and increasingly permeate people’s daily lives, even if they don’t know it. That lack of knowledge tends to drive an either/or response: avoid tech at all costs—possible, if one’s a hermit—or embrace it with no hesitation.

Between these two poles lies a question that everyone, perhaps especially social workers, needs to examine: Will I help shape how this technology unfolds and is used? And in light of COVID-19, which will exacerbate many already dire social issues, social workers at every level of practice may have a once-in-a-lifetime opportunity to address longstanding issues, with the help of algorithmic tools like those used in AI.

The alternative, to borrow from the late French theologian and philosopher Jacques Ellul, is to allow “the determinants” to be “transformed into inevitabilities.” To some extent, they already have, as many automated systems perpetuate inequities and create real harm.

Social workers don’t need degrees in computer science or certificates in coding. But their collaboration with those designing automation systems can help to assess assumptions, root out bias and develop tools that actually complement humans to address seemingly intractable social problems.

To get to “AI for social good” requires data, time, planning, simulations, testing and transparency, says Eric Rice, PhD, an associate professor in the University of Southern California’s Suzanne Dworak-Peck School of Social Work and founding co-director of the USC Center for Artificial Intelligence in Society (CAIS).

Had algorithmic systems been deployed and tested before the COVID-19 crisis, Rice suggests, they could have spared much stress and grief over the supply, location and allocation of ventilators and personal protective equipment.

Other determinants have raised their own questions, including how to navigate telehealth, whose adoption has been accelerated by COVID-19 stay-at-home directives. What if a social worker is approved to provide services through a web-based platform and the Internet goes out? If the social worker uses a landline phone to call the client back, is the social worker violating the law or billing regulations?

These are among the questions that occurred to Jonathan B. Singer, PhD, LCSW, Loyola University Chicago School of Social Work associate professor and founder/host of The Social Work Podcast. What if there had been a social worker on Zoom’s board of directors? he asks. Could that person push for changes so that social workers can do what they need to do while patients and clients receive the services they’re entitled to?

What Is AI?


Artificial intelligence approaches provide the scaffolding for automation. Yet the term itself is slippery: people have a clear sense of “artificial,” but “intelligence” means many things, from IQ to EQ to aesthetic, verbal, spatial or spiritual intelligence. It is also context-dependent.

Computer scientist Melanie Mitchell, PhD, in “Artificial Intelligence: A Guide for Thinking Humans” says AI researchers have ignored these distinctions and focused on efforts to embed biological intelligence in computers. Instead of considering whether computer programs actually think like humans, researchers create programs “that perform tasks as well or better than humans,” she writes.

Singer says computer scientists don’t use the term “AI” because it’s vague. That vagueness has led to a variety of approaches as well as boom-and-bust cycles within AI since the 1970s.

“Machine learning,” says Mitchell, is “a subfield of AI in which machines ‘learn’ from data or from their own ‘experiences.’” In practice, developers choose the variables, the training data and the definition of success; the “learning” is the program adjusting weights on those variables to fit the data it is given. No program learns in a vacuum: the human choices about what counts as input and what counts as a good outcome leave room for bias. An example: Race is not supposed to influence credit approvals and loan rates, but an algorithm can lean on ZIP code and socioeconomic data, which correlate with race, and steer decisions anyway.
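
To make the proxy problem concrete, here is a minimal, purely illustrative Python sketch, not any real lender’s model; the ZIP codes, income figures and weights are all invented. A scoring rule that never sees race can still score neighborhoods, and by extension the people who live in them.

```python
# Illustrative only: a toy credit-scoring rule that never takes race as an
# input yet still encodes geography, which can correlate with race.
# Every number below is invented for the example.

ZIP_INCOME = {          # hypothetical average household income by ZIP code
    "15201": 38_000,
    "15232": 71_000,
    "15090": 94_000,
}

def toy_credit_score(income: float, zip_code: str) -> float:
    """Return a score in [0, 1]; higher is more favorable."""
    # The applicant's own income contributes directly.
    income_term = min(income / 100_000, 1.0) * 0.6
    # Neighborhood average income acts as a proxy: whole ZIP codes are
    # scored up or down, and anyone living there inherits that adjustment.
    neighborhood_term = min(ZIP_INCOME.get(zip_code, 50_000) / 100_000, 1.0) * 0.4
    return round(income_term + neighborhood_term, 3)

# Two applicants with identical incomes get different scores purely because
# of where they live; race never appears, but geography can carry its effect.
print(toy_credit_score(55_000, "15201"))  # 0.482
print(toy_credit_score(55_000, "15090"))  # 0.706
```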

Virginia Eubanks, a political scientist and associate professor at the University at Albany, SUNY, wrote “Automating Inequality: How High-Tech Tools Profile, Police and Punish the Poor.” She cites the Allegheny Family Screening Tool, used in Allegheny County, Pennsylvania, which she says “does not actually model child abuse or neglect.” With limited data, the system uses proxies for child maltreatment. “One…is community re-referral, when a call to the hotline about a child was initially screened out but [Children, Youth and Families] receives another call on the same child within two years. The second proxy is child placement, when a call to the hotline is screened in and results in the child being placed in foster care within two years.”
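
For illustration only, here is a rough Python sketch of how such proxy outcome labels might be constructed from hotline call records. The field names and logic are hypothetical and simplified, not the county’s actual code.

```python
# Hypothetical sketch: deriving the two proxy labels Eubanks describes
# (re-referral and foster-care placement within two years) from call records.
from datetime import date, timedelta

TWO_YEARS = timedelta(days=730)

def proxy_labels(calls):
    """calls: a child's hotline history, oldest first, each a dict like
    {"date": date(...), "screened_in": bool, "placed_in_foster_care": bool}."""
    re_referral = False
    placement = False
    for i, call in enumerate(calls):
        window = [c for c in calls[i + 1:] if c["date"] - call["date"] <= TWO_YEARS]
        # Proxy 1: screened out, but the hotline hears about the child again.
        if not call["screened_in"] and window:
            re_referral = True
        # Proxy 2: screened in, and a placement follows within two years.
        if call["screened_in"] and any(c["placed_in_foster_care"] for c in [call] + window):
            placement = True
    return {"re_referral": re_referral, "placement": placement}

# Neither label observes maltreatment itself; both depend on who calls the
# hotline and how calls are screened, which is where referral bias can enter.
history = [
    {"date": date(2018, 1, 5), "screened_in": False, "placed_in_foster_care": False},
    {"date": date(2019, 3, 2), "screened_in": True, "placed_in_foster_care": False},
]
print(proxy_labels(history))  # {'re_referral': True, 'placement': False}
```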

This points to referral bias rather than screening bias: calls are placed “three and a half times more often” about black and biracial families than about white families, Eubanks writes. That makes human screeners even more important, even though the system was developed to strip bias from human decision-making.

“Allegheny is a good example of the need to have an advisory board made up of advocates and people from the community affected by the tools being developed,” Singer says. And, if SARS-CoV-2 “has shown anything, it highlights how unequal things are, how certain people experience these disparities all the time, it’s nothing new for them.”

“The social work profession needs to be at the forefront of bridging these things,” says Singer. “Social workers have an obligation to get on the advisory boards of tech companies—from startups to established giants.” He envisions the Big 9 tech companies having a chief social worker as an executive or C-level person responsible for thinking through the social implications of tech development.

At the micro-practice level, Singer would like social work practitioners to ask themselves, “How do you feel about tech-mediated practice?” Who in recent years hasn’t been to a physician who asks questions and then types into a laptop or tablet, breaking eye-to-eye contact?

“As a practitioner, how would you feel if you knew an algorithm or code could help you to track what your client is saying, in real time, so that key concepts could be identified and entered into an electronic medical record?” Singer asks. This information could then, like a virtual assistant, consistently convey needs and corresponding resources and next steps, and prompt the client to set up appointments. That would free social workers to focus on what they love—helping clients.
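
As a purely hypothetical illustration of the kind of tool Singer describes, a simple keyword spotter could flag concepts in a session transcript and suggest matching resources for the practitioner to confirm. The concept list, resource names and matching rules below are invented for the sketch.

```python
# Hypothetical sketch: flagging key concepts in a session transcript and
# mapping them to resources a practitioner might confirm and record.
# A real clinical tool would need far more nuance, plus client consent.

CONCEPT_KEYWORDS = {
    "housing instability": ["evicted", "eviction", "couch surfing", "shelter"],
    "food insecurity": ["food bank", "skipping meals", "groceries"],
    "anxiety": ["panic", "anxious", "can't sleep"],
}

SUGGESTED_RESOURCES = {
    "housing instability": "Local housing assistance intake line",
    "food insecurity": "Community food pantry referral",
    "anxiety": "Follow-up screening and coping-skills worksheet",
}

def flag_concepts(transcript: str) -> dict:
    """Return {concept: suggested resource} for concepts mentioned in the text."""
    text = transcript.lower()
    return {
        concept: SUGGESTED_RESOURCES[concept]
        for concept, keywords in CONCEPT_KEYWORDS.items()
        if any(kw in text for kw in keywords)
    }

note = "Client reports being anxious since the eviction notice arrived last week."
print(flag_concepts(note))  # flags 'housing instability' and 'anxiety'
```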

Such tools could help social workers take a longer-range view of clients, much as engineers think about the future. For better and worse, technology has developed around openness and ease of access, to the detriment of security and identity. But, Singer says, the 1996 federal Health Insurance Portability and Accountability Act (HIPAA) gives people ownership and portability of their health records and the ability to grant access to those with whom they want to share them. Putting that control in clients’ hands would help eliminate breaches of confidentiality, and social workers could then advocate for clients’ privacy rather than trying to find provider-side solutions.

The current downside risks with algorithmic bias and client access/confidentiality point to the urgent need for social workers to reach through the tech thicket to their problem-solving engineer and computer scientist counterparts. That’s what Rice did.

Algorithms Help Tackle HIV

In high school, Rice planned to become a mathematician. But as a freshman at the University of Chicago, he learned about Jane Addams and Hull House and refocused on social issues. Rice co-founded CAIS, a joint venture with the USC Viterbi School of Engineering, with Milind Tambe, PhD, now director of the Center for Research in Computation and Society at Harvard. They also co-edited “Artificial Intelligence and Social Work,” which highlights AI tools in social work.

Rice met Tambe at a gathering for social work and engineering faculty in 2014. A chat led to their collaboration on an HIV-prevention project for homeless youth in Los Angeles.

“It might seem like AI and social work are a weird match,” says Rice, “but engineers and social workers are about problem-solving.” Engineers “don’t have the limitations we have as social workers. If you can imagine the problem, they imagine the new technology to solve that problem.”

The need for such tools is driven by the complexity of the issues, such as the number of homeless people relative to available resources. In the CAIS project for HIV prevention among homeless youth, Rice and Tambe had to tackle levels of complexity similar to those behind Google Maps (sheer population plus intricate networks) to answer one question: Who is the small group of people who could reach the most peers with HIV-prevention messages? They wanted to locate homeless youth peer leaders, educate them, then have them disseminate timely information about preventing HIV, which is difficult in a population that may be only temporarily connected to a particular drop-in center.

A year after Rice and Tambe met, they had an algorithm that could solve the problem based on existing data. But it was a broken version, Rice says, and it took another six months to fix before field testing it, which required eight months. By 2016, real-world application and algorithm-development issues began to collide, Rice says. “What we thought were the contours of the problem only encompassed some of the issues.”

It took another six months of tuning followed by another six months of field-testing, and then more tweaking. With the fourth version, they enrolled 800 youth in the program, recruiting the best peer leaders, and reached 80 percent of the youth they intended to reach within a month. Two changes followed: condom use improved, and on a 10-question measure, homeless young people showed a better grasp of what is true and false about HIV prevention.

Gathering that core group was tricky. Relying on adult drop-in center staff might have meant choosing young people who were “strangely marginal in the adolescent community,” says Rice. Financial hardship leads them to the streets and they distance themselves from other young people because “they don't want to get involved in the drama of street life.”

But the most popular youths, the ones most interconnected, tend to come from very dysfunctional backgrounds, may have suffered abuse and trauma, and may have PTSD symptoms. By running simulations, the algorithm could choose, across these groups, the youth likely to have the greatest impact. That points to the benefit of this kind of work: the ability to cut through large swaths of data, simulate solutions and choose the best available option, conserving limited time and money.
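
The underlying selection problem resembles influence maximization over a social network. The sketch below is not the CAIS algorithm; it is a simplified greedy coverage heuristic over an invented friendship graph, shown only to convey the idea of picking a few peer leaders who together reach the most youth.

```python
# Simplified illustration (not the CAIS algorithm): greedily pick k peer
# leaders so that, together, they directly reach as many youth as possible.

# Invented friendship network: each youth maps to the peers they talk to.
network = {
    "A": {"B", "C", "D"},
    "B": {"A", "E"},
    "C": {"A", "F", "G"},
    "D": {"A"},
    "E": {"B", "H"},
    "F": {"C"},
    "G": {"C", "H"},
    "H": {"E", "G"},
}

def pick_peer_leaders(graph, k):
    """Greedy max-coverage: at each step, add the youth whose circle adds
    the most not-yet-reached people to the covered set."""
    leaders, covered = [], set()
    for _ in range(k):
        best = max(
            (node for node in graph if node not in leaders),
            key=lambda n: len((graph[n] | {n}) - covered),
        )
        leaders.append(best)
        covered |= graph[best] | {best}
    return leaders, covered

leaders, covered = pick_peer_leaders(network, k=2)
print(leaders)                                   # ['A', 'H']
print(len(covered), "of", len(network), "youth reached")  # 7 of 8
```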

Algorithms and Youth Violence


Desmond U. Patton, PhD, MSW, is an associate professor at the Columbia University School of Social Work, founding director of SAFELab and co-director of Columbia’s AI4All summer program. He researches the ways in which social media serve as a “digital neighborhood” for many marginalized young people of color. The digital interface offers a sense of behind-the-keyboard anonymity, yet “hypervisibility” and “hyperbravado” too often transfer to the physical neighborhood, where the results can be deadly.

Focusing on Chicago, Patton has partnered with computer scientists for the last six years to develop algorithmic systems that can—from the natural language youth use in social media posts—discern aggression and threats and try to find ways to intervene.

“By the time [violence intervenors are] trying to interrupt, the argument has happened online and it’s too late to deliver in-person services,” says Patton. “We are trying to mitigate that gap.”

The algorithms can be written to detect aggression, loss and substance use. Loss and grief often precede aggression and threats. “Many young people are not using social media to fight each other, but as a place to process complex trauma and grief, which can fester in an online environment and become fodder for aggressive conversation later on,” he says.
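
The SAFELab work relies on context-aware natural language processing built with young people’s input. Purely to show what the simplest, most naive version of post labeling looks like, and why it misfires without context, as Patton notes below, here is a hypothetical keyword tagger; the word lists are invented for the example and are not SAFELab’s method.

```python
# Naive illustration only, NOT SAFELab's approach. A bare keyword tagger
# like this ignores context, slang and local meaning, which is exactly the
# failure mode Patton warns about.

LABEL_KEYWORDS = {
    "aggression": ["opps", "slide", "catch you"],      # invented examples
    "loss": ["rip", "miss you", "gone too soon"],
    "substance_use": ["lean", "faded"],
}

def tag_post(text):
    """Return the set of labels whose keywords appear in the post."""
    lowered = text.lower()
    return {
        label
        for label, words in LABEL_KEYWORDS.items()
        if any(w in lowered for w in words)
    }

print(tag_post("RIP lil bro, miss you every day"))   # {'loss'}
print(tag_post("we gon slide on the opps tonight"))  # {'aggression'}
# The same words can express grief, quotation or joking among friends;
# without context supplied by young people themselves, such labels mislead.
```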

Developing algorithms to detect future offline violence underscores the need to involve social workers and, most important, young people themselves: to understand what they are going through and how they are thinking, to provide context, and to align the algorithms with their lived experience, says Patton.

One example of developer bias that kept surfacing involved the “N word,” he says. Tests identified the word as signifying aggression, yet in context it is often not considered aggressive among young people. Still, “if used as evidence in the criminal justice system, 50 percent of their posts have aggression.”

The project continues to grapple with this issue because the language can be “hyperlocal” and “hypernuanced,” he says. Yet if the tool is used to identify aggression but lacks context, they “run the risk of misidentifying a post that could be used as a weapon against them,” feeding mass incarceration. The National Science Foundation is reviewing a proposal for this work; if approved, the team will develop a pilot intervention.

Patton advocates going slower when developing algorithms. Social workers are needed, he says, because they think about the social impacts of such tools, which can be efficient and useful only to the extent that “they don’t cause or replicate the same problems we’re trying to eliminate.” To that end, Patton and his Columbia colleague Courtney Cogburn have developed a minor concentration within the social work master’s program featuring three courses: a coding class in Python, a required survey of emerging technologies, and a human-centered design course.

“We hope to train a cadre of social work students that can work in integrated tech teams to ensure the development and integration of ethical and humane tech,” he says.

Algorithms and Student Stress

On the opposite end of the AI spectrum are well-being tools such as Ask Ari, an interactive application launched last fall and available 24/7 for USC students to learn more about self-care. Developed by the Office of Wellbeing and Education and the USC Institute for Creative Technologies (ICT), Ask Ari is based on a system built by ICT a few years ago for veterans with PTSD, says Lyndsey Christoffersen, PhD, USC Campus Wellbeing and Education project specialist. That interactive tool, SimCoach, was for veterans who did not want to seek help for their PTSD. But those who used SimCoach were more likely to go to therapy, she says.

Students do not need to sign in and no identifying information is collected. “Users engage in a chat with Ari and are led through self-reflection to address their issues by video, audio, worksheets, links, and many other resources,” Christoffersen says.

She and supervisor Ilene Rosenstein, PhD, former head of counseling and assistant vice provost for Campus Wellbeing and Education, developed the content.

Topics such as life-skills organization, mood, unhealthy thinking and career are based on focus groups and an in-depth quantitative and qualitative questionnaire. The information provided comes from evidence-based, peer-reviewed research, and USC subject matter experts are tapped to ensure it stays current.

When appropriate, Ask Ari may provide referrals, including to USC Mental Health and Counseling, Relationship and Sexual Violence Prevention, the LGBT Resource Center and the Veterans Resource Center, Christoffersen says.

To date, most users return to the app multiple times and many have referred friends. Some 93 percent felt more knowledgeable about how to take care of themselves, Christoffersen says.

Information and features are constantly being added based on students’ feedback, which, for Christoffersen, highlights the need to involve the people an app is created for. “I have learned it is important to listen to students because they are the experts on their own needs and wants,” she says.

Resources

Learn more about assumptions and bias in algorithmic systems, about social work-informed projects that use algorithms, and about how social workers can become more involved in shaping them.

Automating Inequality (Virginia Eubanks, The Open Mind)

Weapons of Math Destruction (Cathy O’Neil, Talks at Google)

AI and Bias Series (Brookings Institution)

Artificial Intelligence in Social Work (Dr. Gabriel Crenshaw and Dr. Eric Rice)

Youth Gun Violence Prevention in a Digital Age

Courtney Cogburn and Desmond Patton on Social Work, Media and Technology (Social Impact LIVE)

Envisioning the Future of Social Work: Report of the CSWE Futures Task Force (April 2018)

Why AI needs social workers and "non-tech" folks

Ask Ari Overview

Human Services and Internet Technologies Association
