EP 133 Transcript: AI and Social Work


Producer
This episode of Social Work Talks is brought to you by the National Association of Social Workers and the University of Texas at Austin School of Social Work's Moritz Center for Societal Impact.

Lorrie Appleton
Welcome to Social Work Talks. I'm your host, Lorrie Appleton. Today's podcast touches the very soul of social work. When the subject of artificial intelligence, or AI, becomes part of the discourse in social work circles, we quickly forewarn our colleagues to beware. We're concerned and apprehensive about AI's innate bias, social inaccuracies, misguided counseling, and the possibility that AI can induce psychotic symptoms. Conversely, some social workers enjoy the benefits of using AI to expedite writing progress notes and maintaining case records. AI has become a valuable asset for those who are financially disadvantaged and may require resources associated with homelessness, basic needs, and affordable therapy. In an article titled "AI in Social Work: Opportunity or Risk," Joe Stevenson and Mithran Samuel found that more than one in five, or 21%, of the 713 social workers who participated in their poll had used AI tools such as Microsoft Copilot or Beam's Magic Notes for daily social work. That's one in five social workers. We stand at a crossroads. How can we use AI responsibly and safely while staying true to the NASW Code of Ethics? Can we glance into the future and see how social work can utilize technology to expand our capabilities to serve our neediest clients? What lies ahead? When thinking about the juxtaposition between ethical practice, human connection, and technological innovation, we are so very fortunate to hear from our presenter, Dr. Lauri Goldkind. Dr. Goldkind is a professor at Fordham University's Graduate School of Social Service and editor-in-chief of the Journal of Technology in Human Services. Dr. Goldkind's current research includes artificial intelligence in nonprofit organizations and social work practice. I can't wait to read more about her research in this area. She holds an MSW from SUNY Stony Brook and a PhD from the Wurzweiler School of Social Work at Yeshiva University. Thank you for joining us, Lauri.

Dr. Lauri Goldkind
Thanks so much, Lorrie. It's an honor to be here.

Lorrie Appleton
Two Lauris, huh? So you earned your MSW and PhD in social work. Why did you choose social work as a discipline?

Dr. Lauri Goldkind
That's a great question. So I graduated, like many social workers, with an undergraduate degree in psychology and a kind of vague idea that I was going to help people in some way, like many of our social work students today. That was quite a long time ago. At the MSW level, I was trying to figure out sort of what that would really look like in practice. In the old days, before there was Crisis Text Line, there were 211 lines and crisis hotlines, and I was a crisis and suicide prevention hotline volunteer. I really entered into that work thinking it would be kind of a test case to see how I would do in this helping orientation. And I found both the community around that work and the work itself to be really interesting and satisfying. That led me to apply to an MSW program at SUNY Stony Brook. At the time, Stony Brook had a concentration called Planning, Administration and Research, and that happened to suit my skill set. In my foundation year, I was in a generalist placement in a youth development organization on Long Island in New York, and, like many social workers, we were sent out to do home visits. That was probably my first and last home visit. It did not really work for my skill set, and I felt quite uncomfortable. Cut to today: there's a big conversation about sort of closing the front door on child welfare. But I was acutely aware of the power differential of being a very young, very white, privileged person going into a not privileged person's home and making an evaluation about their parenting. So having the possibility of a planning, administration and research concentration really did work for me and played to all the things I was good at and have continued to engage with in my academic career, like writing and thinking strategically about organizational development and research. I was really grateful, and still remain grateful, that social work is a big tent, right? It welcomes all the different skill sets. That was really my entry into MSW education. And then I think I have kind of a restless intellectual spirit. I was in the field for about 12 years and came to a point where I was intellectually ready for something different, and I entered into PhD work at Yeshiva University. I really had a nerdy academic crush on the person who was running Yeshiva University's doctoral program. Her name was Margie Gibelman; may she rest in peace, she has passed. She was a really wonderful mentor to me about what intellectual inquiry could look like in the social work arena. My dissertation chair, similarly, was Richard Caputo, who just recently retired. Both of those folks introduced me to the idea of inquiry in a social work arena that would bring social work values to the research process. It's engaged me for quite a while, and I really have the privilege of working with students, and mentoring students and new grads, around both social work inquiry and also how to have a just and dignified social work practice. So it's kept me engaged for a really long time.

Lorrie Appleton
Right. And then how did you shift to, or include, I should say, the intersection between human behavior and technology?

Dr. Lauri Goldkind
It is maybe not surprising to you, but it is not a common set of interests across social workers. Even when I was in my practice life, and I did maybe all of my practice work in and around New York City, alongside the work I was supposed to be doing, I was also always doing database management, presenting data in different ways, and trying to understand how data could be used in social work to make workers' lives better. And frankly, I was always interested in digital stuff. I remember dial-up before broadband; I remember all the things that came before we really had kind of a robust cloud infrastructure. When I think about it, the original databases lived on a computer; they were Access databases, right, with a primary key. That interest has been a thread in my professional life for a really long time. And I think it has been both a curiosity and a frustration to me that social workers have been, generally, and we've written about this, kind of hesitant to embrace digital tools. I find that to be perplexing.

Lorrie Appleton
Yeah, I would agree with that. And you've helped me bridge the connection I was curious about. I believe there is a fear regarding technology, and you reminded us that we've been using technology for a long, long time. When you said dial-up, I went, oh my goodness, I remember that. So when talking about a fear of technology, I think many of us as social workers have a fear of being replaced or reduced in some way because of technology; we're going to lose our jobs. So how can we compete in this rapidly changing digital space, given that we've been cousins to technology for a long time?

Dr. Lauri Goldkind
I think that we have some catching up to do as a field and as a discipline in how we think about acculturating students, certainly to newer digital tools, but also maybe some consciousness raising around the tools that they're already using. Almost independent of what setting you're in, you will encounter different digital devices and digital tools. When you look at Hull House's mappings of their local communities and their neighborhoods, it looks a lot like data visualization. They didn't have, of course, a language 100 years ago for GIS and geographic information systems, but what they were doing was literally mapping by color how people were living in those communities. So we actually do have a long history of some technological engagement, but it's not considered central to the work, and I think more than ever we need to start considering it to be central to the work. If you ask people what electronic health record they use, they have a ready answer for that, right? That is a digital technology, and it does, depending on your perspective, make workers' and clients' lives better. We do have a history, since maybe the 60s and 70s, of thinking about different pockets of social work and technology, particularly for folks who've been in the VA system, because there's a shortage of workers in the VA. They were very early adopters of telehealth and remote-based therapies, particularly in rural regions where there are very few professionals, and the way to maximize those professional services was to have them do video calls. That's a very old form of technology. I think the pandemic cracked open an opportunity for us, particularly for the naysayers who were like, you can't do therapy online. Which, by the way, was preceded by, you can't teach clinical classes online. So I think we've debunked some of that. But we are kind of recalcitrant in our belief that the human-to-human relationship is the be-all and end-all. And I think what we're seeing now, in terms of different fully automated therapeutics, and certainly how the regular public, the regular citizens or non-technical publics, are engaging with different large language model tools, is signaling to us that we have to open up our minds and change our thinking about how we engage with these tools.
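
For readers who want to see the through-line from the Hull House maps to modern tools, here is a minimal, illustrative Python sketch of "mapping by color." The household coordinates and income bands are invented for the example; nothing here comes from the Hull House maps themselves.

```python
# A minimal sketch of "mapping by color," the same idea behind the
# Hull House maps, expressed with a modern plotting library.
import matplotlib.pyplot as plt

# Hypothetical household locations (x, y), each with an income band.
households = [
    (0.1, 0.2, "low"), (0.3, 0.8, "middle"), (0.7, 0.4, "low"),
    (0.5, 0.5, "high"), (0.9, 0.1, "middle"), (0.2, 0.6, "low"),
]
colors = {"low": "tab:red", "middle": "tab:orange", "high": "tab:green"}

# Plot each household, colored by its income band.
for x, y, band in households:
    plt.scatter(x, y, color=colors[band])

plt.title("Income band by location (illustrative)")
plt.show()
```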

Lorrie Appleton
That's a perfect segue for what I was going to ask you next. You were interviewed in a blog titled "Social Workers vs. AI Therapists," and you said, "I think we underestimate how lonely people are." You go on to say, "As a profession, we underestimate how much harm therapists have done, intentionally or unintentionally, that someone would feel safer talking to a robot." I think that is fascinating. It's a fascinating concept. Can you tell us more about your thoughts on that subject?

Dr. Lauri Goldkind
I think that large language models, which are a kind of AI, have kind of opened up the possibilities of what AI is and how individuals are engaging with it. In late 2022, a company called OpenAI releases a utility called ChatGPT. ChatGPT is a large language model; it is a generative pre-trained transformer, which means it is trained on lots and lots of data, primarily Western, English-speaking data, mostly text, but also audio, YouTube videos, and Facebook videos. The innovation is that you can ask this chatbot a plain-language question and it will respond to you with a plain-text answer. So that comes out in 2022, and it is the most quickly adopted piece of digital technology maybe ever; the time to adoption of 100 million users was something like two months. The uptake for that particular tool has been phenomenal. It's very easy to understand how to use, and it responds very quickly. Then in the beginning of 2023, maybe the first quarter of 2023, there starts to be a conversation on social media, on TikTok in particular, of adults with autism spectrum disorder talking about how they like, or feel safe, talking to ChatGPT more than they do to their human therapists. And of course, as social work researchers, we were like, oh, we need data on this. Let's find out what's happening. So we collected TikTok data and TikTok videos and did an analysis with a wonderful group of professors and students from Wayne State, Indiana, and maybe Michigan. We looked at how adults with ASD were talking about their ChatGPT use and how they found it useful. That group of adults felt like therapists did not understand them. They articulated a hard time connecting to therapists, and they felt like the human therapists they had experienced were not able to meet their needs; rather, they felt much more comfortable talking to a chatbot versus a human. That happens in and around 2023. Then early this year, in 2025, a randomized controlled trial, which is sort of the gold standard for an intervention, comes out about a tool called Therabot, a utility out of Dartmouth. They ran this randomized controlled trial for 12 weeks for individuals with substance abuse, eating disorders, depression, and maybe anxiety. And the finding at the end of those 12 weeks is that the chatbot is as effective or more effective than human therapists. Now, listen, we could take apart the research; we could poke holes in their research questions and how they define things. But I think that kind of result signals, one, that people are willing to use these things as a mental health tool and support; "mental health professional" is maybe a bridge too far. And secondly, it does seem to suggest that there are some effects that are useful, that are showing symptom relief. I think that is a signal to us, maybe the biggest signal to us, that we should be sensitive and tuning in to our market. Especially if you want to go into direct practice or clinical practice, we have some competition in the market, where people feel like they are being effectively served by a non-human digital tool.
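
To make the "plain-language question in, plain-text answer out" pattern concrete, here is a minimal sketch using the OpenAI Python SDK. The model name is illustrative, and the call assumes an OPENAI_API_KEY is set in the environment; this is a sketch of the interaction pattern, not a recommendation of any particular product.

```python
# A minimal sketch of the interaction pattern described above:
# a plain-language question in, a plain-text answer out.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative; any chat-capable model works
    messages=[
        {
            "role": "user",
            "content": "In plain language, what is a large language model?",
        }
    ],
)

print(response.choices[0].message.content)  # the plain-text answer
```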

Lorrie Appleton
Well, along those lines, during the pandemic I was working with neurodivergent adults, and some of them who were nonverbal showed up in the chat. It was fascinating, and I was so pleased to hear from them, and they were guiding me on how to use the technology. Now what I'm finding is that people are coming into my office and they've been using ChatGPT to brainstorm alternative interventions in between sessions. So as social work practitioners, should we encourage clients to consider using AI technology as a tool for change?

Dr. Lauri Goldkind
I think we say that we are, right? A seminal piece of practice wisdom is that we meet clients where they are. And there's a recent Harvard Business Review report about how people are actually using ChatGPT, and it's changed a little bit in the last year, maybe the last two years, because the first, second, third, and maybe seventh of the top 10 use cases were therapy, coaching, finding a purpose in life, and another social support function. So four of those use cases are really pointing to people finding perceived value in engaging with that tool, which is a completely non-human thing; it is just putting words together from a probabilistic perspective. That's what a large language model does. I think there's a big knowledge gap between the public who are using these tools and what the mental health or psychotherapy process is. That's one gap. And then there's a complementary gap across professionals, who I think are not trained to think about how to use enhanced digital tools in their practice. We're also not training people to ask clients about their digital lives. I think the confluence of those two complementary conditions is challenging for us as social workers. Thinking about social media 10 years ago, somebody's digital life or their social media life was something that happened out in the metaverse or in the cloud, and that was different from their IRL, their in-real-life life. I don't think it is any longer possible to have that false dichotomy if we are attempting to engage a whole person, because there's digital harm, there's cyberbullying, there are deepfakes; there's a range of things happening in the online world that have an impact in the in-person world, the traditional world. So I actually think those gaps are really challenging for us, because it's hard for a professional, I think, to be comfortable saying, for instance, have you considered using ChatGPT to generate journal prompts? That's a super basic, mostly neutral kind of idea. I don't think most therapists would think to engage with that as a homework kind of idea or an enhancement, because, for one, they may not have tried it themselves, and for two, they may not be comfortable that that technology is a support that is complementary to the therapy process. I think that's challenging for us.

Producer
This episode of Social Work Talks is brought to you by the National Association of Social Workers and the University of Texas at Austin School of Social Work's Moritz Center for Societal Impact. We are partnering on a new survey to learn how social workers are using artificial intelligence in their daily work. We want to understand what tools you're using, what challenges you're running into, and what support would make AI more effective and ethically grounded in practice. The survey takes about 10 to 20 minutes, and your feedback will help shape future resources for the profession. You can find it right on the front page of our website at socialworkers.org, and if you like, you can enter your email at the end for a chance to win a small prize. Emails are stored separately to protect your privacy. Thank you for helping us guide the future of AI and social work.

Lorrie Appleton
What you're talking about is so powerful, because when I am on the NASW forum or talking to colleagues, I will now routinely ask, what do you think about AI? And the response typically is, don't like it, need to do something about it. That leads to my next question: you're currently working with colleagues across the country on how the NASW Code of Ethics can positively impact AI, and you truly have a seat at the table, which is just so cool. Have you found that weaving ethics into discussions regarding AI can be a vehicle for building greater mutual understanding? Even in this discussion, what you're doing is encouraging us, encouraging me, to open up, for our own discipline and for our clients' well-being.

Dr. Lauri Goldkind
I think that if we are committed to meeting folks where they are, we have to navigate this. Similar to the pandemic, it may be the case that the advent of the large language models, the ChatGPTs of the world, or Gemini or Claude, offers us permission to engage with these digital things in a way that we haven't seen before, which could have tremendous upside, I think. So part of the project I think you're mentioning is clinical practice guidelines. I have a few colleagues from around the country, namely Johanna Creswell Baez, who is at the University of Colorado at Colorado Springs, and Elizabeth Matthews, who is with me at Fordham, and we are working on formulating clinical practice guidelines, particularly for use with large language models. Practitioners really need a roadmap that gives them permission to engage with digital tools in a way that they can be used safely, responsibly, and ethically, where we're not just saying, don't use this, which is a conversation stopper rather than a conversation opener, but that also empowers people to know what they can and can't do and what the limits of these technologies are. One way I've been thinking about this constellation of ideas is in three buckets. One is around AI literacy: what do we wish people knew, both our clients and our practitioners? What do we want people to know about these tools? That is a literacy level, right? How do I think critically about this thing that I'm going to use? My wish is that people understand that there's user bias in these systems, as well as the bias that the data has built into it, and maybe a few other kinds of bias, namely one called sycophancy bias, which is that the machine is designed to keep you, the user, engaged. Because these systems are run by complicated neural networks that are designed to mirror brain functioning, we don't 100% know how they're choosing the words they're choosing when they put an output forward. But we do know that the system is designed, similar to social media, to be sticky and to keep you engaged. If you think about that in a therapeutic context, a human therapist with a client in front of them might say things the client doesn't want to hear, because we're supposed to help people navigate hard situations, and humans are messy. The machine is designed to keep you engaged and, to some degree, tell you what you want to hear. That does not connote working with those hard problems. My wish would be to have clinicians understand that so they can help guide clients, but also to have clients understand it. That's the critical AI literacy part: I want folks to be able to think critically before they use an AI system, particularly in high-stakes decision-making contexts. Then there's AI skills: how do I maximize these tools in a way that gets me the efficiency gain I'm supposed to have by using them? That might mean investing time in figuring out what's called prompt engineering. And prompt engineering is really a nuanced way of saying asking the machine a question, right?
So the more I interact with the algorithm, the better or more relevant my responses will get. Framing that question, framing that prompt, in a way that gets me the information that I want is a skill (a brief sketch of what that looks like follows this exchange). It's a very comparable skill to thinking about literature searches. I haven't been in MSW school in a long time, but I remember back to when you had to go to the card catalog and walk to go get the article. Now we have Google Scholar and a range of other tools that make that searching activity a lot easier, but we still have to know what we're looking for. That, to some degree, is analogous to prompt engineering, in that we want to be able to think both critically and in a way that helps us frame a question, so that we're getting the output that we want. From my perspective, that's an AI skill. And then there's a third component, AI use: what are the okay use cases for AI tools, and where do we never want to delegate our thinking to an AI tool? There are a bunch of different lawsuits around the country right now involving OpenAI, which is the maker of ChatGPT, as well as a company called Character AI, which makes AI companion chatbots. Both of those suits are around young people; in one case it's an adult, but there are a few suits about young people dying by suicide where the machine neither stopped the user nor reported out to a human that the user was engaging in suicidal thinking. And we know the person was engaging in suicidal thinking because we have the text transcripts from those conversations. That is one kind of innovation in these things: you can see the history of a user's engagement with the system. I think that is a non-starter. AI tools should not be allowed to assist your suicide; that should be non-negotiable. Industry doesn't have the same do-no-harm ethos, or the liability and licensure, that professionals do. So that's acceptable use: what are the things I can do with an AI system, versus the things that are really going to make me lose my license, those career-ending incidents? That's AI use. And I think those three components, literacy, skill development, and AI use, are, in my mind, the three legs of the stool of practice that can help a practitioner and a professional understand where the limits are with these things. So that's on the practice side. Then there's another mechanism for helping us figure this out, and that's regulation. Every master's student in the country takes at least one semester, if not two, of policy practice and hopefully comes out learning how to do policy advocacy; we hope. Policy and regulation, I think, offer us real possibility and potential toward reining some of these companies in and putting guardrails in place that offer some legal remedy. We want and need, I think, that accountability. So, for instance, NASW Illinois helped to shape a piece of legislation that was signed into law, I think in June, that says an AI cannot be a therapist. For example, there's a tool called Ash that's an AI therapy chatbot; Ash cannot operate in the state of Illinois. And there are other states that have comparable legislation.
Nevada has a similar piece of legislation, Utah has a similar piece of legislation, and there are other states where that kind of legislation is pending. Given our current federal climate, I think we are unlikely to see that at the federal level, like a federal protection. But a way to think about regulation is almost like title protection for social workers, counselors, and psychologists, where we're drawing a clear line in the sand that says an AI system cannot be a therapist. And I think those two pieces, the practice guidelines plus regulation, really could begin to create a comprehensive system.
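
Here is the sketch of prompt engineering mentioned above: the same request, framed vaguely and then with role, audience, and format spelled out. The prompts and model name are invented for illustration, and the call assumes the OpenAI Python SDK with an API key in the environment.

```python
# A minimal sketch of prompt engineering: the same request, framed
# vaguely and then with role, audience, and format spelled out.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

vague_prompt = "Tell me about journaling."

engineered_prompt = (
    "You are assisting a licensed clinical social worker. "
    "Write five short journal prompts a client could use between "
    "sessions. Use plain, non-clinical language, one prompt per "
    "line, and do not give diagnostic or medical advice."
)

# Send both prompts and compare how specific framing changes the output.
for prompt in (vague_prompt, engineered_prompt):
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(reply.choices[0].message.content)
    print("---")
```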

Lorrie Appleton
So this discussion, which is so critical right now, encourages us to think, encourages us to consider possibilities. When we start to shut down and say, here are the things that are wrong with it, we also close off those possibilities. You're looking at this from an ethical vantage point with other people, so we can consider how we can use this technology ethically and morally. And that leads to my final question, and it's daunting, but I'm going to ask it. How do you envision the field of social work evolving within the next decade?

Dr. Lauri Goldkind
That's such a small question, Lorrie. Thank you so much.

Lorrie Appleton
I said it was going to be daunting.

Dr. Lauri Goldkind
I know, it's kind of a whopper. So it's interesting. I have had the benefit of participating in a practice lab of social work academics and practitioners around the country that was run by my friend and colleague Laura Nissen, who is a foresight practitioner. We looked at different foresight strategies to try to address exactly these kinds of questions, like, what does a 10-year horizon look like for social work? We have the potential for a bumpy ride. There are so many different technological developments that are related not just to AI but to other kinds of technology. In the sensing world, for example, there's monitoring and services for older adults through wearables. Wearables used to be something on your arm, but now people are actually thinking about the fabric your clothes are made of and having sensors literally built into the shirt that you're wearing, so that it doesn't require any effort from the user for monitoring to happen. So as the public is wrestling with digital therapeutics, we really need to think tactically about what kinds of AI literacies we want our current students to have access to, and also how we help professionals who are already in the field gain that skill development. I think we are going to have to continue to wrestle with fully automated tools versus fully human tools, and my instinct is that in 10 years it will be somewhere in the middle. My hope is that we do not land at a system where therapy is a very bespoke, privileged activity, where individuals who have resources get a human and the poor, people who are not in the 1%, get a fully automated chatbot, so that there's a bigger chasm in treatment. That's one pathway. The other pathway is social workers really being at the table and helping to design these systems, so that we are able to bring our core values and our ethics to the table. How do you build systems with dignity? What does it look like to have a justice orientation in your AI system? We have a colleague who is very interested in bringing joy and dignity into the AI development process and having non-traditional voices raised up, so that there's a more inclusive orientation in AI development. I do think, though, that in order for us to get to that transformational future, where social workers are engaged in the design and development cycle of digital tools, we have to get a grip on these foundational pieces: AI use, AI literacy, and AI skill development, so that we are able to leap into that future.

Lorrie Appleton
Well, thank you so much, Lauri. It is so exciting to be a social worker right now. Thank you, listeners, for tuning in to this episode of Social Work Talks. We hope this program has sparked your curiosity and created questions that will lead you to become involved in rich, meaningful discussions, and the keyword here is discussions. This has really caused me to be even more interested in the connection between technology and humans. You can find more information about Lauri, her research, and our topic in the show notes section at socialworktalks.org. Thank you for listening, social workers.