Writing, Rhetoric, and AI

Steven D. Krause | Winter 2026 | Eastern Michigan University

About | Course Materials | Create Post

Author: Shane

  • Lee V. Gaines, “To keep AI out of her classroom, this high school English teacher went analog”

    Gaines, Lee V. “To keep AI out of her classroom, this high school English teacher went analog.” NPR, 28 Jan. 2026, https://www.npr.org/2026/01/28/nx-s1-5631779/ai-schools-teachers-students. Accessed 28 Jan. 2026. 

    Gaines provides readers with a timely account of Chanea Bond, a high school English teacher in Fort Worth, Texas, and her efforts to limit student AI use in her courses. In addition to detailing Bond’s reasoning behind the move, which primarily centers on a desire to build authentic critical thinking capabilities in her students, Gaines also summarizes the methods being used. These include mandated handwritten assignments, daily writing to develop student voice, and more frequent feedback throughout longer writing processes. The article also gathers feedback from students, several of whom feel positively about their teacher’s decision. Gaines offers a counterperspective as well by discussing how other teachers, districts, and even government entities are implementing or encouraging AI use in education.

    As AI and the debate around it become more pressing in society at large and in school settings in particular, I find stories like this important for framing the argument and describing the perspectives and approaches that people are taking toward it. Because it centers the real, lived experiences of students and teachers, the article puts a human face on what can feel at times like an abstract issue. Questions about how AI should be implemented in educational environments need to involve consultation with the experts who are directly engaged in the day-to-day work.

  • Kate Conger, “California Investigates Elon Musk’s xAI Over Sexualized Images”

    Conger, Kate. “California Investigates Elon Musk’s xAI Over Sexualized Images.” The New York Times, 14 Jan. 2026, https://www.nytimes.com/2026/01/14/technology/grok-ai-x-investigation-california.html. Accessed 19 Jan. 2026.

    Conger reports on the recent concern over the AI chatbot Grok and how it has enabled users on the X social media platform to create nonconsensual sexualized pictures of real people (predominantly, it seems, of women and children). Rob Bonta, the attorney general for the state of California, is investigating whether xAI (founded and led by Elon Musk) is in violation of state law. Conger details similar investigations in other countries and outlines the penalties California may impose. She also notes comments from Musk and official xAI statements claiming that internal limits, regulations, and policies are in place to prevent Grok from creating “illegal” content.

    I find the situation described in this article interesting because it illustrates the tensions between the power of AI companies, the rapid growth in the capabilities of AI tools, and the void in established law to regulate those tools. While Conger maintains an overall objective tone in her piece, the reporting raises the question of where responsibility lies when AI is used to produce illegal content. Does the fault lie with the creator or the user? I feel that as issues like this emerge, they point to a growing need for both institutional and legal regulation to prevent unethical AI use.