If AI Has You Feeling Clueless and Concerned, You're in Good Company

Illustration: data made into clouds raining binary

By Clara Berridge, Lauri Goldkind, and John Bricout

Most people don’t understand what artificial intelligence does, and many aren’t aware they’ve used AI tools. Worse still, a lot of people assume AI is smarter than they are. Despite this, AI tools have infiltrated child welfare, older adult services, disability services, therapy and behavioral health, and other areas of practice. While the infusion of AI tools in social work practice is spreading, many people say they know little about AI overall. In fact, in a poll of more than 300 attendees of our recent NASW webinar on AI—most of whom had an MSW—72% rated their knowledge of AI as “low,” compared with only 1% who selected “high.”

AI is a confusing marketing term that has, since the 1950s, referred to a range of different technologies and functionalities. There is no simple definition. It is used to describe everything from automated decision-making (the use of algorithms to classify, suggest, or make consequential decisions) to facial recognition to the generation of synthetic text or images (e.g., ChatGPT, Bard). It is not surprising that most people don’t understand it. The misleading jargon and anthropomorphizing descriptions (ascribing empathy, sentience, “brain power,” and understanding) give it an air of mystique. This lack of transparency is a significant problem for social workers, whose expertise, along with that of the people they work with, is desperately needed.

Don’t despair. First, we need to acknowledge that the tech sector is driving the train. Those who can code AI tools see the areas social work focuses on as a business opportunity. The concentration of power is stark: the field is dominated by private industry fueled by venture capital and investment, two sectors that are wealthy, white, and male-dominated. If it seems like big tech is swooping in with its tools du jour to solve social problems it has little relationship to or direct experience with, well, it is. And that’s clearly a problem when social work values take a backseat to the profit motive.

It is not ethical to move fast and break things, because unregulated, market-driven AI harms those who are most marginalized first. In her new policy brief on the need to build requirements for public participation into AI policy, poverty law scholar Michele Gilman lays out key areas where the use of AI poses the greatest risks to civil and human rights, risks that demand the input and participation of a broader range of people. Examples include:

  • Access to government services and benefits
  • Gathering and retention of biometric, health, or other sensitive personal information
  • Surveillance of vulnerable populations
  • Policing functions of the state, such as law enforcement and child welfare
  • Gatekeeping access to life necessities such as housing, credit, education, employment, and health care.

Social workers, the points above should feel very familiar: they are signature areas where social work is active. This should drive home the critical need for social workers to be engaged. AI has been deployed in the public and private sectors in ways that foreclose housing opportunities, bar individuals from employment, restrict access to health care services, and incentivize physical surveillance. But we are more likely to hear from AI enthusiasts about the unrealized promise and potential of “tech for social good” than about these present-day algorithmic harms.

Automation bias is the tendency to assume that computers are objective, authoritative, and fair, even when there is evidence to the contrary. It causes people to go along with a machine’s output even when it doesn’t sit right with them. Awareness of automation bias doesn’t make anyone immune to it, but understanding the phenomenon is crucial for social workers, and confident gut checks are a helpful counter to it in practice.

Calls for social workers to “be at the table,” helping to shape AI offerings, can feel empty or insincere, as few social workers are invited into decision-making before solutions are developed and sold to them. But no question is a bad question in an age of AI hype. Here are some to start with to help guide your engagement with AI, whether or not you have high-level, decision-making power at an organization.

How were end users and impacted people involved in design and development? How were their experiences incorporated? (We know it’s problematic to develop a social work intervention without engaging the professionals who implement it and the people it is designed to help, so we must also recognize that excluding the people impacted by AI tools from the get-go is likely to cause harm.)

What’s being automated, why, and who benefits from this automation? How much money is invested in it? What are the potential opportunity costs? How well does it work in the specific case we’re using it for? How are success and accuracy measured? What are the evaluation metrics? Where does accountability lie when something goes wrong?

We haven’t dived into large language models (LLMs) here, but we suggest reading or listening to experts like Emily Bender and Alex Hanna (co-hosts of the podcast at dair-institute.org/maiht3k) and asking questions that include:

What texts was the bot trained on, what or who is excluded, and which cultural assumptions feed its analysis? Regardless of whether it’s an LLM tool, it’s critical to understand what data an AI tool was trained on.

It can be easy to become fatalistic about unchecked AI applications when power is so concentrated in big tech. It’s demoralizing when expertise stays in the realm of tech and systems become tech-dependent without core participants clearly understanding what an AI application does. But there’s a lot of energy and creative work out there to flip the power script, whether through acts of refusal or by using AI in the service of social or economic justice.

We challenge you to consider how each of us, in our own spheres of practice, can contribute to naming and shifting this concentration of power.


Clara Berridge

Clara Berridge, PhD, MSW, is an associate professor at the University of Washington School of Social Work. Her research focuses on the ethical and policy implications of digital technologies used in elder care, such as monitoring systems and companion robots.



Lauri Goldkind

Lauri Goldkind, PhD, LMSW, is an associate professor at Fordham University and the editor of the Journal of Technology in Human Services. She is a network co-lead for the Grand Challenges for Social Work-Harnessing Technology for the Social Good. She can be reached at goldkind@fordham.edu.



John Bricout

John Bricout, PhD, MSW, is professor and chair of the social work department at the University of Texas at San Antonio. He is a network co-lead for the Grand Challenges for Social Work-Harnessing Technology for the Social Good.




Viewpoints Disclaimer


Viewpoints columns are guest editorials about topics related to social work. They are written by contributors to Social Work Advocates magazine, and do not necessarily represent the opinions or reflect the policies of NASW. If you are interested in writing for Viewpoints, please email us at swadvocates@socialworkers.org.