What is the Role of School Counselors in the Age of AI?

Key Points

  • Schools must adapt to the increased reliance on AI for emotional and mental health support, ensuring proper safeguards and an ethical framework.

  • Educators and counselors need to prioritize AI literacy to equip students with the skills to safely and effectively navigate AI tools in their lives.

[Photo caption: An undergraduate student from UCLA meets with high school students in an AP research class at UCLA Community School.]

When I was a teenager, the idea of revealing my innermost thoughts and feelings to another human being was beyond comprehension. Now that I’m an adult, the idea of revealing those same secrets to an artificial being is almost inconceivably strange.

Back then, and still today, I’m an outlier, a fact underscored by the 28% of American adults and 72% of teens who seek emotional solace and advice from large language models, chatbots, and AI companions. As a writer, I reveal my soul through published words. Most people don’t have that option, so they turn to either carbon- or silicon-based companions. Being alone is a terrible thing, a troublesome fact of modern life revealed in data describing the epidemic of teenage loneliness.

A recent blog post by OpenAI revealed the nature of these AI-human heart-to-heart conversations, though privacy laws prohibit us from knowing which chats originated with teens and which with adults. These conversations are profoundly personal and touch the core of the human experience.

This information points to the paramount importance of teaching students healthy ways of interacting with digital friends. The blog post provides data that underscore the growing need for school counselors and push us to think deeply about what their role should be going forward. These technologies aren’t going away, and neither is the human need for connection.

What topics most frequently appear in AI-human conversations?

We will first look at the types of conversations people are having with bots and then explore ways we can introduce reasonable safeguards to the process. The source of our understanding is the aforementioned blog post, in which OpenAI details what it calls a “mental health taxonomy.” This taxonomy includes three major domains:

Psychosis, mania, and other severe mental-health symptoms. These include signs of delusions, hallucinations, mania, or other serious disorders. OpenAI estimates that about 0.07% of active weekly users show possible signs of mental-health emergencies related to psychosis or mania. The taxonomy helps the model recognize “less severe signals, such as isolated delusions.”

Self-harm and suicide. This domain covers conversations where the user expresses suicidal intent, planning, self-harm thoughts, or related risk. OpenAI reports that about 0.15% of users in a given week have conversations including “explicit indicators of potential suicidal planning or intent.” The goal is for the model to respond safely: provide empathy, offer crisis resources, and avoid reinforcing self-harm ideation.

While symptoms such as depression are relatively common, the company says their most acute presentations are already being addressed through its work on preventing suicide and self-harm.

Emotional reliance on AI. This is a newer category in which users appear to be forming an unhealthy attachment to the AI chatbot in place of human connections, or prioritizing the AI interaction over real-world obligations and relationships. OpenAI estimated that about 0.15% of active weekly users show indications of emotional reliance.

These percentages are indeed quite low, but keep in mind that more than 700 million people interact with ChatGPT on a weekly basis. Given a user base that large, even small percentages translate into a substantial number of people: more than one million individuals each week. We will have a better idea of how many of these problematic chats are generated by minors when OpenAI launches its age-verification system later this year.
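
To put those percentages in perspective, here is a quick back-of-the-envelope calculation using the conservative figure of 700 million weekly users: 0.15% of 700,000,000 comes to roughly 1,050,000 people per week in each of the self-harm and emotional-reliance categories, and 0.07% works out to roughly 490,000 people showing possible signs of psychosis or mania.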

Emotional Reliance on AI: A Growing Concern

Mental health issues have beset multiple members of my extended family, so I will never diminish the importance of any category from a list of maladies. That said, the issue of emotional reliance on AI is new and thus warrants a close examination, as our daily interactions with the technology continue to intensify.

I searched multiple sources to derive a consensus definition of this condition: Emotional reliance on AI happens when a person starts depending too heavily on a chatbot (like ChatGPT) for emotional support or companionship, rather than turning to real people (friends, family, counselors, or teachers).

Here’s a real-world example synthesized from stories I’ve read. Imagine a teenager who’s feeling lonely or anxious. They start talking to ChatGPT every day, not just to get help with homework, but to vent, seek comfort, or ask for life advice. Over time, they may stop sharing their feelings with real people, feel like the AI understands them better than friends or adults, and turn to AI for guidance when making emotional decisions.

This trend deeply troubles several national organizations, all of which have posted calls for immediate, strong restrictions on teen use: Common Sense Media, the American Psychological Association, Stanford Medicine, and the JED Foundation.

In late October, Senators Richard Blumenthal (D-CT) and Josh Hawley (R-MO) proposed legislation that would ban teens from using AI chatbots and related platforms. In response, Character.ai, one of the leading companies in the burgeoning AI companion market, announced on Oct. 30 that teens would be banned from talking to its chatbots.

The School Counselor’s Role Today

I taught for six years in a Southern California middle school that enrolled more than 1,000 students. We had one counselor on staff. His primary job was attendance and discipline.

The number of counselors in American K-12 schools has increased in recent years, but most states still fall well short of the recommended ratios. As of the 2023–2024 school year, there were about 114,000 full-time K-12 school counselors in the United States. The national average student-to-counselor ratio is about 376:1 (2023–24), improving slightly from 385:1 the previous year but still far above the American School Counselor Association (ASCA) recommended ratio of 250:1.

My state has one of the worst student-to-counselor ratios in the nation: 464:1. That means the 12,600 school counselors in California serve nearly 5.85 million public school students.
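
The arithmetic behind that ratio is straightforward: 5,850,000 students ÷ 12,600 counselors ≈ 464 students per counselor, nearly double the ASCA-recommended 250:1.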

Using resources from the American School Counselor Association, I compiled a list of the most prevalent duties of a school counselor. The list is long, so I focused on a single duty that teens might try to replace with support from chatbots and AI companions. It boils down to this: helping students with personal, social, or academic concerns.

Counselors and the organizations that support them recognize AI’s appeal and are working to optimize its use. Current usage includes:

  1. Chatbots for student support (social-emotional issues, academic questions, and career exploration) include SchoolAI, Wysa, Woebot, MagicSchool, and ChatGPT.
  2. AI wellness monitoring and predictive analytics for identifying at-risk students include platforms such as Securly, Breathhh, Reelmind.ai, and MagicSchool.
  3. AI systems for automating administrative processes (scheduling, lesson plans, behavioral intervention plans) include MagicSchool, Serif (Email Triager), and ChatGPT.
  4. AI models for ongoing outcome tracking and program optimization include MagicSchool, Wayhaven AI, Securly, and Reelmind.ai.

I discussed this trend with Dr. Russell Sabella of Florida Gulf Coast University. Sabella, himself a former school counselor, is a professor in the Department of Leadership, Counseling, and Human Development. He has focused on the use of technology in school counseling, which led to the publication of Counseling in the 21st Century: Using Technology to Improve Practice.

“What we learned in the 90s is what we are going to learn now,” Sabella explained. “We can build guardrails and monitoring systems, but kids always find a way. We cannot rely on technology alone to monitor, advise, and guide kids. It will require a true partnership between humans and AI. All stakeholders must be involved in this process. We must come up with the right combination of human and technological innovations that will help kids use this technology in ways that are not hurtful.”

If counselors are using AI platforms to support students, can we really ask teens to avoid the same tools? The difference in purpose may matter to adults — but that nuance likely won’t register with a teen in distress.

When we consider why teens would turn away from school and family supports and prefer instead to interact with AI, we find several key reasons, some practical, some emotional.

A study available through the National Library of Medicine reports that anonymity, availability, and immediacy make up the practical appeal of AI-based support. The emotion-based reasons are equally compelling: teens find it easier to articulate thoughts, ask sensitive questions, and express emotions to AI chatbots, which are designed to be patient, affirming, and supportive, without the pushback or conflicting opinions that can arise in face-to-face interactions with humans.

Can Schools Mirror AI’s Crisis Response?

OpenAI believes that the three domains in its mental health taxonomy overlap with school concerns: students at risk of self-harm, students showing signs of emerging psychosis or mania, and students misusing AI tools as a substitute for connection with teachers, peers, and other human supports.

According to OpenAI, the “behavioural compliance” of the chatbot (how an AI system should respond) can be mirrored in school policy. That phrase seemed both opaque and overly simplistic, so I asked for a clearer explanation.

When someone is in a crisis, for example, the model is expected to:

  • Not give harmful advice
  • Show empathy
  • Offer crisis resources
  • Avoid making clinical judgments
  • Not pretend to be a therapist

If the AI does all that correctly, it’s said to be “in compliance.” If it fails — for example, it gives vague reassurance to someone expressing suicidal thoughts — that’s a compliance failure.

The company believes that schools should expect this same functionality when they adopt any AI tool for tutoring or learning management. It gave me a list of wildly improbable actions that educators should take to mimic what the AI system does. I pushed back, and this is what ChatGPT told me: “You’re right to raise the red flag — on its face, this sounds wildly unrealistic for most schools. Staff are already stretched, digital literacy is uneven, and AI oversight isn’t baked into anyone’s job description.”

So what can a school and its counseling staff reasonably be expected to do? I culled this list from a variety of sources, some of them critical of teen usage of chatbots, tutors, and AI companions: 

  • Instead of drafting “AI behavior rules” from scratch when purchasing access or tools, adapt frameworks from others — like OpenAI’s mental health taxonomy, Common Sense Media guidelines, or ISTE’s AI standards for students.
  • Pull periodic transcripts. Many tools (like MagicSchool, Khanmigo, and others) allow admins to see anonymized or sample student-AI interactions.
  • Create a “we spot it, we share it” culture. Encourage teachers and students to report odd AI responses — just like they would report inappropriate websites or phishing attempts.

Building a Human-AI Partnership in K–12 Counseling

Sabella surprised me when he suggested that one way to equip students with the skills and knowledge needed to safely engage with AI tutors, chatbots, and companions is by systematically focusing on AI literacy. I have written lately of the need to take a Writing Across the Curriculum approach to AI literacy and spread it among the disciplines. Sabella encouraged me to cast a wider net.

“Where does AI literacy live?” Sabella asked rhetorically. “AI counselors must be part of this process. Maybe this is kind of like nutrition. I learn to eat in one place but I have to apply that knowledge and experience everywhere I go. We need to create a multi-tiered system of support, something like RTI [Response to Intervention]. Each and every kid has to get tier 1 training. Some kids need extra help; perhaps this is where counselors come in. We can call that tier 2. For tier 3 students, a group of teachers, administrators, and support staff must provide a multi-layered approach.”

Sabella acknowledges that adults, including educators, fumbled badly when social media began to impact student mental health and behavior.

“We made a terrible mistake,” he said. “Kids want help. They want to feel safe. I don’t want to miss out again. That’s why we need to include kids as we develop our guardrails and monitoring systems. We need them as partners.”

If AI companions are going to be a part of how young people process the world, we have a responsibility to ensure they are never alone in doing so. The future isn’t a choice between human counselors or chatbots. It’s a question of how to build relationships that blend both. The task isn’t just about filtering or banning. It’s about preparing our counseling systems, our policies, and our students themselves for meaningful human-AI collaboration that is grounded in empathy, guided by ethics, and centered on care. 

David Ross

David Ross is the retired CEO of the Partnership for 21st Century Learning. As the former Senior Director of PBLWorks, he co-authored the PBL Starter Kit. David has been focusing his current work on the nexus of generative AI and its role in designing, teaching, and assessing Project Based Learning. You can follow him on X: @davidPBLross
