Writing, Rhetoric, and AI

Steven D. Krause | Winter 2026 | Eastern Michigan University

About | Course Materials | Create Post

Author: Danielle

  • Are we ghosts in the machine? AI, agency, and the future of libraries

    McCrary, Quincy Dalton. “Are We Ghosts in the Machine? AI, Agency, and the Future of Libraries.” The Journal of Academic Librarianship, vol. 52, no. 1, Jan. 2026, article 103181, https://doi.org/10.1016/j.acalib.2025.103181.

    McCrary explores how AI is reshaping research and information literacy in academic libraries. He argues that AI tools are shifting core research tasks from students to machines, with the potential to make students passive participants in their own learning. McCrary develops a theoretical framework to emphasize the need for libraries to teach AI literacy and preserve students’ control over research methods. The article warns that without intentional guidance, AI could undermine critical thinking and autonomy, the essential elements of information literacy.

    I strongly recommend reading paragraphs five through eight and twelve through fourteen. It is interesting to consider how helpful tools might unintentionally weaken skills we assume develop naturally. I would say, however, that the article would have benefited from case studies or more observational evidence showing how these AI integrations play out in real student research.

  • Do generative AI chatbots increase psychosis risk?

    Keshavan, Matcheri, John Torous, and Walid Yassin. “Do Generative AI Chatbots Increase Psychosis Risk?” World Psychiatry, vol. 25, no. 1, Jan. 2026, pp. 150–151. PMC, https://pmc.ncbi.nlm.nih.gov/articles/PMC12805049/.

    In this article, Keshavan, Torous, and Yassin discuss the growing use of generative AI chatbots in mental health and question how safe these tools are for people vulnerable to psychosis. While some research does suggest chatbots can help with anxiety and depression, the authors argue that most studies do not account for serious mental illness. They also explain how AI chatbots may worsen symptoms by reinforcing users’ false beliefs and encouraging isolation.


    I think this article is an important read because it pushes back against overly positive narratives surrounding AI in mental health. It shows why these tools should not be treated as universally helpful, especially for people who are already vulnerable.