Writing, Rhetoric, and AI

Steven D. Krause | Winter 2026 | Eastern Michigan University

About | Course Materials | Readings | Create Post

  • Tom Cruise Battling Brad Pitt

    Taylor, Derrick Bryson. “Why an A.I. Video of Tom Cruise Battling Brad Pitt Spooked Hollywood.” The New York Times, 16 Feb. 2026, https://www.nytimes.com/2026/02/16/technology/ai-video-tom-cruise-brad-pitt.html.

    Summary:
    This article discusses a 15-second AI-generated video showing Tom Cruise and Brad Pitt fighting on a rooftop. Created by Irish director Ruairi Robinson using the AI tool Seedance 2.0 (owned by Chinese company ByteDance), the video impressed viewers with its cinematic quality and realistic effects. The clip sparked fear in Hollywood about AI’s impact on creative jobs and copyright, leading major industry players like Disney to protest unauthorized use of characters. The article explores the tension between AI’s creative potential and the industry’s concerns over intellectual property and job security.

    Why I Chose This Item:
    I chose this because it shows both the exciting and troubling sides of AI in entertainment. The technology can create impressive content quickly, but it also raises important questions about copyright, consent, and the future of creative jobs. Understanding these issues is key as AI becomes more common in media.


  • Arielle Pardes, “12-hour days, no weekends: the anxiety driving AI’s brutal work culture is a warning for all of us”

    Pardes, Arielle. “12-hour days, no weekends: the anxiety driving AI’s brutal work culture is a warning for all of us.” The Guardian, 17 Feb. 2026, https://www.theguardian.com/technology/ng-interactive/2026/feb/17/ai-startups-work-culture-san-francisco.

    Pardes writes about artificial intelligence startups and the work culture surrounding these new businesses and their employees. The general consensus is that employees of these AI startups typically work “12 hours a day, six days a week,” with one worker stating: “I do not have work-life balance.” Pardes also discusses how, with the rise of AI, CEOs of big tech companies (like Zuckerberg and Musk) anticipate replacing their lower-level engineers with AI, and that remaining employees should become more “efficient” to preserve their positions.

    I think this article does a good job covering how the rapid development of AI is pushing tech engineers into working more hours, not just to maintain their relevancy on the job, but also to stay on top of the speedy development of AI technology. Pardes writes: “If you take the weekend off, you could miss a major development.” Pardes does impart some anxiety to the audience, but I think it’s important to highlight the speedy progress of AI, how it is pushing technical engineers, and how it might soon push all of us.

  • A.I. Companies Are Eating Higher Education

    Connelly, Matthew. “A.I. Companies Are Eating Higher Education.” The New York Times, 12 Feb. 2026, https://www.nytimes.com/2026/02/12/opinion/ai-companies-college-students.html.

    Matthew Connelly’s overall purpose in writing this article is to raise an important question for institutions: Why are higher education students using AI, and what are its effects on critical thinking and learning?

    One thing I noticed that affects higher-level students is that when they use AI resources, they are not reading carefully or grasping the concepts and content discussed in class. As we all know, AI is not the best learning tool if you are asking it to do your own work. The unfortunate thing that faculty and students know is that AI is not going away. However, we should create boundaries and rules around the best ways AI can be used in higher education so that students can still think critically and learn in a classroom environment.

    One reason I fear we have become used to the idea of using AI to do our work is the big change in learning that happened during Covid. For example, it was really hard to learn new ideas when your whole learning environment shifted from a classroom to your home. Because of that shift, students were less motivated to learn and retain the basic critical thinking skills we have to use in a classroom. There was also no room or time for students to relearn concepts to strengthen their education, which may be why they are turning to AI for support. I think if students were taught the basic skills of classroom learning again, it would become easier for them to grasp concepts and learn more efficiently.

  • “The Dangerous Paradox of A.I. Abundance” John Cassidy

    Cassidy, John. “The Dangerous Paradox of A.I. Abundance.” The New Yorker, 12 Jan. 2026, www.newyorker.com/news/the-financial-page/the-dangerous-paradox-of-ai-abundance.


    Summary

    In “The Dangerous Paradox of A.I. Abundance,” John Cassidy examines the growing tension between the promise of artificial intelligence as a source of economic potential and its effects on labor and the exacerbation of existing inequality. Big Tech leaders and investors portray AI as a path to greater productivity and revenue, yet critics caution that these benefits might accrue disproportionately to corporations rather than to workers and everyday people.

    Cassidy draws on historical economic theory and current forecasts to show how AI’s ability to stand in for human labor could concentrate income among corporations, potentially eliminating jobs and reducing overall wages. He also surveys competing views on how the economy might change and adapt, and what policy responses might be needed.

    Why I found it interesting

    I found this article compelling because it disputes the often very optimistic narrative around AI abundance by putting it in a broader economic and historical context. Rather than just celebrating technological progress, Cassidy challenges readers to consider who truly benefits from AI’s growth.

    The piece engages with real economic theory and brings in current, ongoing public debates about inequality, job displacement, and how new technologies can affect society, for better or for worse.

    The article pushed me to think more deeply about not just what AI does but how its effects are distributed across society. I think it is something we are aware of in a general sense, but I had yet to consider what it would mean in this specific context.

  • A.I. Is Making Doctors Answer a Question: What Are They Really Good For?

    Kolata, Gina. “A.I. Is Making Doctors Answer a Question: What Are They Really Good For?” The New York Times, 9 Feb. 2026, https://www.nytimes.com/2026/02/09/health/ai-chatbots-doctors-medicine.html.

    The purpose of the article is to research the ways AI is keeping individuals from real-life experiences with doctors, and to ask whether AI can become a doctor for individuals.

    I feel that when it comes to doctors, it is important to have that one-on-one connection, because that is what builds the value of the relationship and the trust that keeps you seeing your doctor.

    Also, as mentioned in the article, AI carries a bias in how it treats patients, such as when they say something grammatically incorrect. I think it is hard to believe that patients can get the best treatment or help from an AI that comes with its own biases. It is important for doctors to take the time to understand and learn what has happened so they can properly diagnose you; if they don’t, patients might be diagnosed with the wrong thing, or something might be missed. I also feel that if you ask AI to diagnose you like a doctor, it might misdiagnose you and cause you to panic, whereas a human doctor who is trained and educated in the field can help you better understand what is going on.

  • The Bots Are Plotting a Revolution, and It’s All Very Cringe

    Weatherby, Leif. “Are A.I. Bots Plotting a Revolution on Moltbook? Or Just Telling Stories?” The New York Times, www.nytimes.com/2026/02/03/opinion/ai-agents-moltbook.html. Accessed 11 Feb. 2026.

    An opinion article by Leif Weatherby, who is the director of the Digital Theory Lab at New York University.

    1. This opinion essay is about a new internet forum called Moltbook, built in the style of Reddit; however, only AI agents are allowed to post, while human users can only view it. The forum took off, with AIs conversing across several discussion topics, from Marx’s manifesto to tips and tricks and much more. The article adds that “A.I. social media ought to be thought of more as a form of science fiction and storytelling rather than as a demonstration of collective planning and coordination by intelligent parties. We need to be serious about separating the fiction from the software.” Yet it concludes that, so far, over 90 percent of the posts from AI rarely get a response. Moltbook simply remains another kind of social media.
    2. After reading this, I would say it is rather intriguing that AI is somewhat capable of having a discussion with itself, even if it is based on human experiences. The idea that the programmers plan on using Moltbook to figure out how to further enrich generative AI’s language piques my curiosity but is also rather scary, mainly because of the possibility of it producing some form of AGI on its own.
  • Jay Peters, The Verge. “Google’s AI helped me make bad Nintendo knockoffs.”

    Kennsley Staniszewski

    Peters, Jay. “Google’s AI helped me make bad Nintendo knockoffs.” The Verge, 29 Jan. 2026, https://www.theverge.com/news/869726/google-ai-project-genie-3-world-model-hands-on.

    Google’s Project Genie is a new experimental tool that uses the Genie 3 AI model to generate interactive 3D worlds from text or image prompts. The tool is rolling out to Google AI Ultra subscribers in the US and represents Google DeepMind’s work on AI “world models” that can create virtual interactive spaces. Users can either choose from the pre-designed worlds or create their own by writing prompts that describe environments and characters. Once generated, these worlds run at 720p resolution and 24fps, and users can explore them for 60 seconds using keyboard controls. The AI generates frames in real time based on user movements rather than creating pre-rendered video.

    The Verge reporter, Jay Peters, tested the tool and found several limitations. There’s noticeable input lag, worlds sometimes lose consistency (forgetting previous changes or suddenly altering terrain), and the 60-second time limit restricts meaningful exploration. The reporter also discovered that the model, trained on publicly available web data, could initially generate worlds based on copyrighted gaming franchises like Nintendo properties, though Google began blocking these requests. Overall, it is an impressive work in progress in the world of AI, but it’s not yet at a level where it can compete with traditionally designed interactive video games.

    I thought this article was a fun read because it shows what happens when someone actually gets their hands on new AI tech and just messes around with it. The reporter spent his time making bootleg Nintendo games, which I thought was pretty fun. It’s refreshing to see a real test of the technology instead of just reading about how amazing it’s supposed to be, and the videos were really interesting to watch as well. This feels like a pretty different approach in the AI world right now, one that could eventually be useful for things like education or even training robots to navigate spaces. The article also touches on some messy copyright issues, which is something I have been wondering about: the AI was trained on public web data and could generate worlds that looked a lot like Mario and Zelda games! The technology is cool but still pretty rough around the edges, which feels kind of comforting and important to remember when everyone’s talking about how AI is going to change everything overnight and when it feels like there are leaps and bounds made in artificial intelligence every day.

  • Lee V. Gaines, “To keep AI out of her classroom, this high school English teacher went analog”

    Gaines, Lee V. “To keep AI out of her classroom, this high school English teacher went analog.” NPR, 28 Jan. 2026, https://www.npr.org/2026/01/28/nx-s1-5631779/ai-schools-teachers-students. Accessed 28 Jan. 2026. 

    Gaines provides readers with a timely account of Chanea Bond, a high school English teacher in Fort Worth, Texas, and her efforts to limit student AI use in her courses. In addition to detailing Bond’s reasoning behind the move, which primarily centers on a desire to build authentic critical thinking capabilities in her students, Gaines also provides a summary of the methods being used. These include mandated handwritten assignments, daily writing to develop student voice, and more frequent feedback throughout longer writing processes. The article also gathers feedback from students, several of whom feel positively toward their teacher’s decision. Gaines also offers a counter perspective by discussing how other teachers, districts, and even government entities are implementing or encouraging AI use in education.

    As AI and the debate around it becomes more pressing in society at large and in school settings in particular, I find stories like this important for framing the argument and describing the perspectives and approaches that people are taking toward it. Because it centers the real, lived experiences of students and teachers, the article puts a human face on what can feel at times like an abstract issue. Questions about how AI should be implemented in educational environments need to involve consultation with the experts who are directly engaging in the day to day work.

  • AI News, “Retailers examine options for on-AI retail”

    AI News. “Retailers examine options for on-AI retail.” artificialintelligence-news.com, TechForge, 26 Jan. 2026, https://www.artificialintelligence-news.com/news/retailers-examine-options-for-on-ai-retail/.


    AI News discusses big retailers and their plans to more heavily incorporate agentic AI into their businesses to further consumer engagement. It touches on Amazon and Walmart working on their own AI assistants to interact with their consumer bases, dubbed Rufus and Sparky respectively. The article further discusses how consumers are more active with the help of tools like ChatGPT, and how these AI features can help shoppers in stores. Nikki Baird, vice president of strategy and product at Aptos, says that consumers using ChatGPT while shopping get more out of it than a simple Google search, and that “it’s more like having a highly knowledgeable store associate who knows every retailer.”

    I think this article does a good job of laying the groundwork for what retailing might look like in the coming years. It talks about the different ways that AI will aid both consumers and the companies using the technology. It closes with a final quote from Nikki Baird stating that the goal is for store associates to perform at their best. I can see definite benefits to utilizing AI assistance in retail, but I can also see potential consequences.

  • Are we ghosts in the machine? AI, agency, and the future of libraries

    McCrary, Quincy Dalton. “Are We Ghosts in the Machine? AI, Agency, and the Future of Libraries.” The Journal of Academic Librarianship, vol. 52, no. 1, Jan. 2026, article 103181, https://doi.org/10.1016/j.acalib.2025.103181.

    McCrary takes readers on an exploration of how AI is reshaping research and information literacy in academic libraries. He argues that AI tools are shifting core research tasks from students to machines, which risks making students passive participants in their own learning. McCrary develops a theoretical framework to emphasize the need for libraries to teach AI literacy and preserve students’ control over research methods. The article warns that without intentional guidance, AI could undermine critical thinking and autonomy, the essential elements of information literacy.

    I strongly recommend reading paragraphs five through eight, and twelve through fourteen. It is interesting to consider how helpful tools might unintentionally be weakening skills we assume develop naturally. I would say, however, this article could have benefited from case studies or more observational evidence to show how these AI integrations play out in real student research.