05/01/2026

Gatekeeping Student Usage of AI in Career Counseling Courses

By Galaxina G. Wright, Diana Charnley, and Neslihan Caliskan


Gatekeeping is a key obligation to ensure counselors-in-training (CITs) are best prepared for the counseling profession (Rapp et al., 2018), and there is a wide range of factors to consider when addressing the professional competencies and dispositions of CITs. One of the most recent concerns is the incorporation of artificial intelligence (AI) into academic work (Gonzalez & McCalla, 2025). While the core role of gatekeeping is for counselor educators to take a proactive approach to addressing the challenges and professional dispositions of CITs, there is limited research, understanding, and guidance on how counselor educators should effectively address AI usage in academic and professional contexts (Crofford et al., 2026). This article presents common AI gatekeeping issues and best practices for addressing them.


Common Issues with Gatekeeping AI

One of the primary challenges for counselor educators is determining when AI is appropriately aiding students in their work versus when it is effectively completing the work for them. Opinions on AI use vary widely. Some proponents view AI as a beneficial tool for tasks such as generating project outlines, brainstorming research ideas, or refining grammar and syntax. Others, however, express concern about its use, particularly regarding students’ ability to recognize appropriate boundaries when incorporating AI into their assignments (Swidan et al., 2025).

A significant concern among those who are more apprehensive is that academic dishonesty has become easier than ever, largely due to the rapid expansion of AI platforms and technological advancements (Oravec, 2023). Supporting this concern, a recent survey of college students in the United States found that approximately one in three students reported completing a written assignment with the assistance of an AI chatbot (Chan, 2023). Additionally, more than half of the participants indicated that they used AI chatbots for most of their assignments throughout the semester (Chan, 2023).

Further complicating this issue is the growing uncertainty surrounding the reliability of AI-detection tools. There have been several documented instances in which plagiarism detection platforms have falsely accused students of using AI. According to the New York Times (Metz, 2025), tools such as Turnitin include AI-detection features that can generate false positives, incorrectly identifying human-written work as AI-generated. This creates a difficult situation for faculty, who are tasked with upholding academic integrity while having limited reliable resources to support this gatekeeping responsibility.

Another challenge counselor educators face involves helping students understand the limitations related to data security and data processing within AI platforms (Swidan et al., 2025). There is increasing concern that these platforms collect user data without transparency regarding how or where that information is stored and used. This poses potential risks, particularly when students input sensitive information, such as client-related material. In addition to concerns about data input, there are also issues related to AI-generated outputs. Reports have highlighted instances in which AI tools produce inaccurate information, including fabricated sources, incorrect citations, and unreliable client assessments (Metz & Weise, 2025; Pierre, 2025).

Finally, one of the most complex challenges for both counselor educators and students is navigating institutional perspectives and policies regarding AI use. Just as individual opinions about AI vary, university guidelines are often inconsistent or unclear (College Board, 2026). Many students may have previously been encouraged—or even expected—to use AI in academic, professional, or personal contexts, which can further blur expectations around appropriate use in their current academic environment (Legatt, 2025; Marken, 2026). Individual perceptions of AI can significantly influence one’s level of engagement with and understanding of these tools and platforms.

Best Practices and Strategies

To help counselor educators choose an ethical path forward, the authors recommend they engage in their own exploration and evaluation of AI platforms while reflecting on personal values, biases, and interpretations related to AI. Some examples include utilizing the Multicultural and Social Justice Counseling Competencies (MSJCC; Ratts et al., 2016) or the Digital Education Council AI Readiness Framework (Digital Education Council, 2024).

Further, it is recommended counselor educators become familiar with and understand the university’s policies and stance on AI usage. Institutional perspectives on AI currently exist along a spectrum; some universities strongly discourage its use due to concerns about academic integrity, while others view AI as an emerging and integral part of the future and are actively embracing its rapid development (Swidan et al., 2025).

When instituting best practices, we offer the following potential strategies for ethical student engagement and instructor gatekeeping related to AI:
  • First, speak with the student about their AI use and motivations, emphasizing your course and university policies on AI.
  • If AI is suspected in submitted work, utilize two to three AI detection tools (e.g., ZeroGPT or GPTZero) to triangulate the likelihood and percentage of AI use.
  • Refer students to appropriate resources that correspond with their reason for AI use (e.g., APA style guides, writing centers, offices for students with disabilities, university resources on AI, reputable public resources on AI).
  • Consider integrating class activities or assignments where you teach students how to use AI and explicitly discuss if and how they might be allowed or expected to use it for your course (e.g., assignments about teaching clients how to use AI for career development, or the use of AI in developing resumes or cover letters).
  • Review relevant ethical and legal standards for Protected Health Information that using AI may violate.
  • Model appropriate AI use and citation in your own course materials.
  • Review the potential for AI to hallucinate sources and explore strategies to prevent or address hallucinated references (e.g., read all references provided to corroborate them).
  • Share examples of appropriate and/or inappropriate AI use in assignments and discuss what makes them so, including a review of crafting strong AI prompts.
  • Use real or simulated career counseling case studies that incorporate AI within interventions and engage students in a discussion of their perspectives on its application, effectiveness, and ethical implications.
  • Integrate your expectations for AI use and any policies into your program orientation and courses.
  • For in-person gatekeeping scenarios, consider having the student complete an assignment without the use of technology, such as discussing a provided case study with faculty using only hard copy resources (e.g., a print version of the Diagnostic and Statistical Manual of Mental Disorders [DSM] or a theories textbook), to assess comprehension of material.
  • In online gatekeeping scenarios, university programs might invest in more proctored examinations or proctoring software to evaluate learning in each course.
  • If necessary, create a policy or protocol on AI use for both faculty and students in consultation with your institution’s administration.

The Future of Gatekeeping and AI in Counselor Education

AI is quickly becoming ubiquitous across many professions and everyday tasks in society, and that includes counseling and counselor education. The efficacy of gatekeeping for the profession has already been affected by the emergence of AI, and it is imperative that counselor educators take a proactive approach to the use of AI in counseling and in the classroom. To guide CITs in becoming ethical and informed consumers of AI, we as counselor educators must become so ourselves.

 

References

Chan, C. K. Y. (2023). A comprehensive AI policy education framework for university teaching and learning. International Journal of Educational Technology in Higher Education, 20(1), 38. https://doi.org/10.1186/s41239-023-00408-3

College Board. (2026, February 25). New College Board research: Faculty express near-universal concern that student AI use undermines original writing and critical thinking. Newsroom. https://newsroom.collegeboard.org/new-college-board-research-faculty-express-near-universal-concern-student-ai-use-undermines

Crofford, H., Bor, E., & Kemer, G. (2026). Counseling professionals’ perspectives on AI integration in education and supervision: A concept mapping study. Counselor Education and Supervision. https://doi.org/10.1002/ceas.70016

Digital Education Council. (2024, August 30). How students use AI: The evolving relationship between AI and higher education. Digital Education Council. https://www.digitaleducationcouncil.com/post/how-students-use-ai-the-evolving-relationship-between-ai-and-higher-education

Gonzalez, L., Jr., & McCalla, D. (2025). Counselor-in-training perceptions of artificial intelligence: Implications for counselor education. Journal of Counselor Preparation and Supervision, 19(3), 9. http://dx.doi.org/10.70013/qmr6nxlt

Legatt, A. (2025, September 18). 90% of college students use AI: Higher ed needs AI fluency support now. Forbes. https://www.forbes.com/sites/avivalegatt/2025/09/18/90-of-college-students-use-ai-higher-ed-needs-ai-fluency-support-now/

Marken, S. (2026, April 2). AI is routine for college students, despite campus limits. Gallup. https://news.gallup.com/poll/704090/routine-college-students-despite-campus-limits.aspx

Metz, C. (2025, May 17). A new headache for honest students: Proving they didn’t use A.I. The New York Times. https://www.nytimes.com/2025/05/17/style/ai-chatgpt-turnitin-students-cheating.html

Metz, C., & Weise, K. (2025, May 5). A.I. is getting more powerful, but its hallucinations are getting worse. The New York Times. https://www.nytimes.com/2025/05/05/technology/ai-hallucinations-chatgpt-google.html

Oravec, J. A. (2023). Artificial intelligence implications for academic cheating: Expanding the dimensions of responsible human-AI collaboration with ChatGPT. Journal of Interactive Learning Research, 34(2), 213-237. https://doi.org/10.70725/304731gmmvhw

Pierre, J. M. (2025, July 24). AI hallucinations in medicine and mental health. Psychology Today. https://www.psychologytoday.com/us/blog/psych-unseen/202506/ai-hallucinations-in-medicine-and-mental-health

Rapp, M. C., Moody, S. J., & Stewart, L. A. (2018). Becoming a gatekeeper: Recommendations for preparing doctoral students in counselor education. The Professional Counselor, 8(2), 190–199. https://doi.org/10.15241/mcr.8.2.190

Ratts, M. J., Singh, A. A., Nassar-McMillan, S., Butler, S. K., & McCullough, J. R. (2016). Multicultural and Social Justice Counseling Competencies: Guidelines for the counseling profession. Journal of Multicultural Counseling and Development, 44(1), 28–48. https://doi.org/10.1002/jmcd.12035

Swidan, A., Lee, S. Y., & Romdhane, S. B. (2025). College students’ use and perceptions of AI tools in the UAE: Motivations, ethical concerns and institutional guidelines. Education Sciences, 15(4), 461. https://doi.org/10.3390/educsci15040461

 



 

Galaxina G. Wright, PhD, is an Assistant Professor in the College of Psychology at Nova Southeastern University (NSU). She is a Licensed Mental Health Counselor (LMHC; FL) and National Certified Counselor (NCC). She currently provides career counseling services in a Florida private practice, primarily serving clients who face toxic workplace environments and employment maltreatment. Dr. Wright also serves as the NCDA Trustee for Counselor Educators and Researchers and is the Past President of the Florida Career Development Association (FCDA). She may be reached at galaxinawright@gmail.com.

 

 


Diana Charnley, PhD, LMHC (WA), LPC (MI), ACS, BC-TMH, NCC, completed her master’s degree in clinical mental health counseling and her doctorate in counselor education and supervision. Her research, including her dissertation, focuses on gatekeeping and training in counselor education. She is licensed in both Michigan and Washington; holds credentials as an Approved Clinical Supervisor, Board Certified-TeleMental Health Provider, and National Certified Counselor; has taught and supervised at several universities across the U.S.; and serves as a CACREP Liaison and Site Visitor. She is currently a core faculty member at City University of Seattle, where she teaches the career development course, and maintains a small telehealth practice, including career counseling.

 

Neslihan Caliskan, B.A., is a second-year graduate student in the College of Psychology at Nova Southeastern University, where she is pursuing a master’s degree in Clinical Mental Health Counseling. She earned her bachelor’s degree with honors in Psychology from the University of South Florida. Neslihan’s clinical interests include individual and group therapy, as well as career counseling and education, particularly for college students from diverse and disadvantaged backgrounds. She is passionate about supporting individuals in navigating their personal, academic, and professional development. Throughout her academic journey, Neslihan has held multiple leadership roles, including serving as the Florida Career Development Association (FCDA) Social Media Chair and as an event coordinator in national and international organizations.
