Writing, Rhetoric, and AI

Steven D. Krause | Winter 2026 | Eastern Michigan University

About | Course Materials | Readings | Create Post

  • Matcheri Keshavan, John Torous, and Walid Yassin, “Do Generative AI Chatbots Increase Psychosis Risk?”

    Keshavan, Matcheri, John Torous, and Walid Yassin. “Do Generative AI Chatbots Increase Psychosis Risk?” World Psychiatry, vol. 25, no. 1, Jan. 2026, pp. 150–151. PMC, https://pmc.ncbi.nlm.nih.gov/articles/PMC12805049/.

    In this article, Keshavan, Torous, and Yassin discuss the growing use of generative AI chatbots in mental health and question how safe these tools are for people vulnerable to psychosis. While some research does suggest chatbots can help with anxiety and depression, the authors argue that most of those studies do not account for serious mental illness. They also explain how AI chatbots may worsen symptoms by reinforcing users’ false beliefs and encouraging isolation.

    I think this article is an important read because it pushes back against the overly positive narratives surrounding AI in mental health. It shows why these tools should not be treated as universally helpful, especially for people who are already vulnerable.

  • Kate Conger, “California Investigates Elon Musk’s xAI Over Sexualized Images”

    Conger, Kate. “California Investigates Elon Musk’s xAI Over Sexualized Images.” The New York Times, 14 Jan. 2026, https://www.nytimes.com/2026/01/14/technology/grok-ai-x-investigation-california.html. Accessed 19 Jan. 2026.

    In her article, Conger reports on recent concern over the AI chatbot Grok and how it has enabled users on the X social media platform to create nonconsensual sexualized pictures of real people (predominantly, it seems, of women and children). Rob Bonta, the attorney general of California, is investigating whether xAI (the company founded and led by Elon Musk) is in violation of state law. Conger details similar investigations in other countries and outlines the penalties California may impose. She also notes comments from Musk and official xAI statements claiming that internal limits, regulations, and policies are in place to prevent Grok from creating “illegal” content.

    I find the situation described in this article interesting because it outlines the tensions among the power of AI companies, the rapid growth in the capabilities of AI tools, and the void in established law to regulate them. While Conger maintains an objective tone throughout, her reporting raises the question of where blame lies when AI is used to produce illegal content: with the creator of the tool, or with the user? I feel that as issues like this emerge, they point to a growing need for both institutional and legal regulation to prevent unethical uses of AI.

  • Hua Hsu, “The End of the Essay”

    Hsu, Hua. “The End of the Essay.” The New Yorker, 7 and 14 July 2025, pp. 21–27. https://www.newyorker.com/magazine/2025/07/07/the-end-of-the-english-paper.

    Hsu writes about the ways college students are using AI to complete their assignments and about how college professors and administrators are responding. His essay is based on interviews with many students and faculty members, along with other research on the implications of AI and the future of education. The link is to the online version of the essay, which ran under the title “What Happens After A.I. Destroys College Writing?”

    I think this essay does a good job of addressing the complexities, anxieties, and emotions of both students and educators about AI. Hsu does not argue “for” or “against” AI per se, though he does suggest that higher education needs to reform as a result of it. Ultimately, I think the titles of both the print and online versions fail to accurately reflect Hsu’s views: I don’t believe he thinks AI is going to “end” or “destroy” writing.