Writing, Rhetoric, and AI

Steven D. Krause | Winter 2026 | Eastern Michigan University



  • Jay Peters, The Verge. “Google’s AI helped me make bad Nintendo knockoffs.”

    Kennsley Staniszewski

    Peters, J. (2026, January 29). Google’s AI helped me make bad Nintendo knockoffs. The Verge. https://www.theverge.com/news/869726/google-ai-project-genie-3-world-model-hands-on

    Google’s Project Genie is a new experimental tool that uses the Genie 3 AI model to generate interactive 3D worlds from text or image prompts. The tool is rolling out to Google AI Ultra subscribers in the US and represents Google DeepMind’s work on AI “world models” that can create virtual interactive spaces. Users can choose from pre-designed worlds or create their own by writing prompts that describe environments and characters. Once generated, these worlds run at 720p resolution and 24fps, and users can explore them for 60 seconds using keyboard controls. The AI generates frames in real time based on user movements rather than creating pre-rendered video. Verge reporter Jay Peters tested the tool and found several limitations: there’s noticeable input lag, worlds sometimes lose consistency (forgetting previous changes or suddenly altering terrain), and the 60-second time limit restricts meaningful exploration. Peters also discovered that the model, trained on publicly available web data, could initially generate worlds based on copyrighted gaming franchises like Nintendo properties, though Google began blocking these requests. Overall, it is an impressive work in progress in the world of AI, but it’s not yet at a level where it can compete with traditionally designed interactive video games.

    I thought this article was a fun read because it shows what happens when someone actually gets their hands on new AI tech and just messes around with it. The reporter spent his time making bootleg Nintendo games, which I thought was pretty fun, and the videos were really interesting to watch. It’s refreshing to see a real test of the technology instead of just reading about how amazing it’s supposed to be! This feels like a pretty different approach in the AI world right now, one that could eventually be useful for things like education or even training robots to navigate spaces. The article also touches on some messy copyright issues, which is something I have been wondering about: the AI was trained on public web data and could generate worlds that looked a lot like Mario and Zelda games. The technology is cool but still pretty rough around the edges, which feels kind of comforting and important to remember when everyone is talking about how AI is going to change everything overnight and when it seems like artificial intelligence makes leaps and bounds every day.

  • Lee V. Gaines, “To keep AI out of her classroom, this high school English teacher went analog”

    Gaines, Lee V. “To keep AI out of her classroom, this high school English teacher went analog.” NPR, 28 Jan. 2026, https://www.npr.org/2026/01/28/nx-s1-5631779/ai-schools-teachers-students. Accessed 28 Jan. 2026. 

    Gaines provides readers with a timely account of Chanea Bond, a high school English teacher in Fort Worth, Texas, and her efforts to limit student AI use in her courses. In addition to detailing Bond’s reasoning behind the move, which centers primarily on a desire to build authentic critical thinking capabilities in her students, Gaines summarizes the methods being used. These include mandated handwritten assignments, daily writing to develop student voice, and more frequent feedback throughout longer writing processes. The article also gathers feedback from students, several of whom feel positive about their teacher’s decision. Gaines also offers a counterpoint by discussing how other teachers, districts, and even government entities are implementing or encouraging AI use in education.

    As AI and the debate around it become more pressing in society at large and in school settings in particular, I find stories like this important for framing the argument and describing the perspectives and approaches people are taking toward it. Because it centers the real, lived experiences of students and teachers, the article puts a human face on what can at times feel like an abstract issue. Questions about how AI should be implemented in educational environments need to involve consultation with the experts who are directly engaged in the day-to-day work.

  • Quincy Dalton McCrary, “Are We Ghosts in the Machine? AI, Agency, and the Future of Libraries”

    McCrary, Quincy Dalton. “Are We Ghosts in the Machine? AI, Agency, and the Future of Libraries.” The Journal of Academic Librarianship, vol. 52, no. 1, Jan. 2026, article 103181, https://doi.org/10.1016/j.acalib.2025.103181.

    McCrary takes readers on an exploration of how AI is reshaping research and information literacy in academic libraries. He argues that AI tools are shifting core research tasks from students to machines, which has the potential to make students passive participants in their own learning. McCrary offers a theoretical framework to emphasize the need for libraries to teach AI literacy and to preserve students’ control over research methods. The article warns that without intentional guidance, AI could undermine critical thinking and autonomy, the essential elements of information literacy.

    I strongly recommend reading paragraphs five through eight and twelve through fourteen. It is interesting to consider how helpful tools might unintentionally weaken skills we assume develop naturally. I would say, however, that this article could have benefited from case studies or more observational evidence to show how these AI integrations play out in real student research.

  • Matcheri Keshavan, John Torous, and Walid Yassin, “Do Generative AI Chatbots Increase Psychosis Risk?”

    Keshavan, Matcheri, John Torous, and Walid Yassin. “Do Generative AI Chatbots Increase Psychosis Risk?” World Psychiatry, vol. 25, no. 1, Jan. 2026, pp. 150–151. PMC, https://pmc.ncbi.nlm.nih.gov/articles/PMC12805049/.

    In this article, Keshavan, Torous, and Yassin discuss the growing use of generative AI chatbots in mental health and question how safe these tools are for people vulnerable to psychosis. While some research does suggest chatbots can help with anxiety and depression, the authors argue that most studies do not account for serious mental illness. They also explain how AI chatbots may worsen symptoms by reinforcing users’ false beliefs and encouraging isolation.

    I think this article is an important read because it pushes back against overly positive narratives surrounding AI in mental health. It shows why these tools should not be treated as universally helpful, especially for people who are already vulnerable.

  • Kate Conger, “California Investigates Elon Musk’s xAI Over Sexualized Images”

    Conger, Kate. “California Investigates Elon Musk’s xAI Over Sexualized Images.” The New York Times, 14 Jan. 2026, https://www.nytimes.com/2026/01/14/technology/grok-ai-x-investigation-california.html. Accessed 19 Jan. 2026.

    In her article, Conger reports on the recent concern over the AI chatbot Grok and how it has enabled users on the X social media platform to create nonconsensual sexualized pictures of real people (predominantly, it seems, of women and children). Rob Bonta, the attorney general of California, is investigating whether xAI, the company founded and led by Elon Musk, is in violation of state law. Conger details similar investigations in other countries and outlines the penalties California may impose. She also notes comments from Musk and official xAI statements claiming there are internal limits, regulations, and policies in place to prevent Grok from creating “illegal” content.

    I find the situation described in this article interesting because it outlines the tensions between the power of AI companies, the rapid growth in the capabilities of AI tools, and the void in established law to regulate those tools. While Conger maintains an overall objective tone in her piece, the reporting opens up the question of where blame lies when AI is used to produce illegal content: does the fault lie with the company that created the tool or with the user who prompted it? I feel that as issues like this emerge, they point to a growing need for both institutional and legal regulation to prevent unethical AI use.

  • Hua Hsu, “The End of the Essay”

    Hsu, Hua. “The End of the Essay.” The New Yorker, 7 & 14 July 2025, pp. 21–27, https://www.newyorker.com/magazine/2025/07/07/the-end-of-the-english-paper.

    Hsu writes about the ways in which college students are using AI to complete their assignments, and about the ways in which college professors and administrators are responding. His essay draws on many interviews with students and faculty, along with other research on the implications of AI and the future of education. The link is to the online version of the essay, which ran under the title “What Happens After A.I. Destroys College Writing?”

    I think this essay does a good job of addressing the complexities, anxieties, and emotions of both students and educators about AI. Hsu does not argue “for” or “against” AI per se, though he does suggest higher education needs to reform as a result of it. Ultimately, I think the titles of both the print and online versions fail to accurately reflect Hsu’s views: that is, I don’t think Hsu believes AI is going to “end” or “destroy” writing.