Writing, Rhetoric, and AI

Steven D. Krause | Winter 2026 | Eastern Michigan University

About | Course Materials | Readings | Create Post

  • AI is enabling robots to assist in surgery. What to know

    Blum, Karen. “AI Is Enabling Robots to Assist in Surgery. What to Know.” HealthJournalism.org, Association of Health Care Journalists, 17 Sept. 2025, healthjournalism.org/blog/2025/09/ai-is-enabling-robots-to-assist-in-surgery-what-to-know/.

    Drawing on several peer-reviewed research articles, this blog post from the AHCJ summarizes important advancements in robotic surgery driven by the rapid development of AI, current as of its publication. The article details how a fully designed and trained robot surgeon, SRT-H, was able to perform a gallbladder surgery independently, without human intervention and with complete success. Additionally, further advances are being made in China to create incredibly sophisticated robotic arms that assist human surgeons in procedures with minimal patient risk.

  • AI Drives New Opportunities and Risks in Space

    Signé, Landry, et al. “AI Drives New Opportunities and Risks in Space.” Brookings, 23 Jan. 2026, www.brookings.edu/articles/ai-drives-new-opportunities-and-risks-in-space/.

    This article shows how advances in AI have affected the space market and the technology used for space exploration. The effects are partly positive, for example bringing faster and more efficient machines to the International Space Station, but they also carry unforeseen challenges, such as weakened system cybersecurity.

  • The Bots Are Plotting a Revolution, and It’s All Very Cringe

    Weatherby, Leif. “Are A.I. Bots Plotting a Revolution on Moltbook? Or Just Telling Stories?” The New York Times, www.nytimes.com/2026/02/03/opinion/ai-agents-moltbook.html. Accessed 11 Feb. 2026.

    This opinion essay is about a new internet forum called Moltbook, built in the style of Reddit, where only AI agents are allowed to post; human users can only view it. The forum took off with AIs conversing across many discussion topics, from Marx’s Communist Manifesto to tips and tricks and much more. The article argues that “A.I. social media ought to be thought of more as a form of science fiction and storytelling rather than as a demonstration of collective planning and coordination by intelligent parties. We need to be serious about separating the fiction from the software.” It also notes that, so far, the vast majority of the AI posts, over 90 percent, get little to no response. For now, Moltbook remains just another social media site.

    After reading this, I would say it is rather intriguing that AIs are somewhat capable of having a discussion with one another, even if it’s based on human experiences. The idea that the programmers plan on using Moltbook to figure out how to further enrich generative AI’s language piques my curiosity, but it is also rather scary, primarily because it could produce some form of AGI in itself.

  • We’re All in a Throuple With A.I.

    Miller, Amelia. “We’re All in a Throuple with A.I.” The New York Times, www.nytimes.com/2026/02/13/opinion/ai-relationships.html. Accessed 18 Feb. 2026.

    Miller writes about AI and companionship, and how relying on AI to meet those needs is not in our best interest. She mentions that a good percentage of teens are using AI for their emotional needs, alongside the rise of AI therapists and counselors. However, her main focus is on the people programming AI and the companies funding it. She points out that work is being done to give AI emotional intelligence, yet the programmers themselves understand and acknowledge that they shouldn’t rely on artificial intelligence for their own emotional needs. AI emotional intelligence is being treated as a new marketing opportunity, one that can potentially drive more users and profit.

    I thought the article was super interesting, and I gained more insight into the thoughts and minds of the developers themselves: “They support guardrails in theory, but don’t want to compromise the product experience in practice.” And I get it; they are there to make money, and if the company they work for is expecting higher profit margins, they have to comply. But at what cost, you know? And although the AI industry has responded to the negative outcomes and threats, it still doesn’t feel like enough. It reminds me of putting icing on top of a really bad cake. It’ll taste okay at first, but if the cake is bad, it’s still bad.

  • AI toy company Miko adds an AI off switch after political pressure

    NBC News. “AI Toy Company Miko Adds AI Switch Amid Political Pressure.” NBC News, 27 Mar. 2024, www.nbcnews.com/tech/security/ai-toy-company-miko-adds-ai-switch-political-pressure-rcna259401.

    Summary:
    Miko, a company that makes AI-powered interactive toys for children, has introduced a new AI on-off switch for its popular Miko 3 and Miko Mini robots. This new parental control option, announced after political scrutiny and a data exposure incident, allows caregivers to disable the toys’ conversational AI features. The company faced criticism when a website was found publicly exposing thousands of AI-generated responses directed at children, raising concerns about child safety and data privacy. Despite Miko’s assurances that no voice recordings were leaked, politicians and watchdogs have expressed continued concern about AI toy security. The situation highlights broader issues with the rapid rise of AI toys, which remain largely unregulated and vulnerable to generating inappropriate content.

    Why this item is important/interesting:
    This article is significant because it illustrates the growing challenges and public concerns around integrating AI into children’s toys, especially regarding privacy and safety. Miko’s decision to add an AI switch reflects increasing political and consumer pressure to give parents more control over AI interactions with their children. It also sheds light on the wider, largely unregulated AI toy market and the technological vulnerabilities that come with AI chatbots. Understanding these developments is crucial as AI becomes more embedded in everyday products, raising questions about ethics, security, and parental oversight.

  • Instacart’s AI-Enabled Pricing Experiments May Be Inflating Your Grocery Bill, CR and Groundwork Collaborative Investigation Finds

    Kravitz, Derek. “Instacart’s AI-Enabled Pricing Experiments May Be Inflating Your Grocery Bill, CR and Groundwork Collaborative Investigation Finds.” Consumer Reports, 9 Dec. 2025, www.consumerreports.org/money/questionable-business-practices/instacart-ai-pricing-experiment-inflating-grocery-bills-a1142182490/. Accessed Feb. 2026.

    In this article, investigative reporter Derek Kravitz describes how Consumer Reports worked with Groundwork Collaborative and More Perfect Union to investigate variations in Instacart’s pricing. This article was interesting to me because I had previously heard rumors of dynamic pricing systems coming to grocery stores like Kroger or Walmart, and wondered how that might impact shoppers.

    During their investigation, the researchers had participants use Instacart to add the same specific items to their carts to see if prices and totals varied from one person to another. They discovered that even with the exact same items, from the same retailers, in the same place and time frame, participants still saw different prices and totals from Instacart.

    The connection to AI is that AI is being used to develop this dynamic pricing technology, which determines how much customers will pay for certain items based on the many details it gathers about them. “Retailers are now using AI and other technologies to create detailed profiles on their customers, with the potential to personalize prices and discounts down to the individual shopper.”

  • Microsoft CEO warns AI needs to spread beyond Big Tech to avoid bubble

    Halverson, Alex. “Microsoft CEO Nadella’s ‘Telltale Sign’ of AI Bubble.” The Seattle Times, 21 Jan. 2026, www.seattletimes.com/business/microsoft/microsoft-ceo-warns-ai-needs-to-spread-beyond-big-tech-to-avoid-bubble/. Accessed 18 Feb. 2026.

    Halverson’s article highlights quotes from Microsoft CEO Satya Nadella and his thoughts on diffusing AI beyond the big tech corporate space.

    The part of Halverson’s article I found most interesting was the section where Nadella addresses workers’ fear that AI will not just help them but replace them: “Nadella addressed those fears in a blog post at the end of 2025, in which he argued AI should be thought of as a ‘scaffolding for human potential’ rather than a substitute. He also sneaked into the post that he’d rather people stop arguing about the AI ‘slop’ that’s invading much of the internet.”

    Nadella’s claims in the face of workers’ fears come across to me as him not really taking those fears seriously, because he is more worried about profits and expanding AI products. Even so, the idea that AI needs to keep expanding to sustain its own growth was interesting and something I hadn’t heard much about before.

  • Tom Cruise Battling Brad Pitt

    Taylor, Derrick Bryson. “Why an A.I. Video of Tom Cruise Battling Brad Pitt Spooked Hollywood.” The New York Times, 16 Feb. 2026, www.nytimes.com/2026/02/16/technology/ai-video-tom-cruise-brad-pitt.html.

    Summary:
    This article discusses a 15-second AI-generated video showing Tom Cruise and Brad Pitt fighting on a rooftop. Created by Irish director Ruairi Robinson using the AI tool Seedance 2.0 (owned by Chinese company ByteDance), the video impressed viewers with its cinematic quality and realistic effects. The clip sparked fear in Hollywood about AI’s impact on creative jobs and copyright, leading major industry players like Disney to protest unauthorized use of characters. The article explores the tension between AI’s creative potential and the industry’s concerns over intellectual property and job security.

    Why I Chose This Item:
    I chose this because it shows both the exciting and troubling sides of AI in entertainment. The technology can create impressive content quickly, but it also raises important questions about copyright, consent, and the future of creative jobs. Understanding these issues is key as AI becomes more common in media.

  • 12-hour days, no weekends: the anxiety driving AI’s brutal work culture is a warning for all of us

    Pardes, Arielle. “12-Hour Days, No Weekends: The Anxiety Driving AI’s Brutal Work Culture Is a Warning for All of Us.” The Guardian, 17 Feb. 2026, www.theguardian.com/technology/ng-interactive/2026/feb/17/ai-startups-work-culture-san-francisco.

    Pardes writes about artificial intelligence startups and the work culture surrounding these new businesses and their employees. The general consensus is that employees of these AI startups typically work “12 hours a day, six days a week,” with one worker stating: “I do not have work-life balance.” Pardes also discusses how, with the rise of AI, CEOs of big tech companies (like Zuckerberg and Musk) anticipate potentially replacing their lower-level engineers with AI, and that those employees should be more “efficient” to preserve the positions still held by humans.

    I think this article does a good job covering how the rapid development of AI is pushing tech engineers to work more hours, not just to stay relevant in their jobs but also to keep up with AI’s speedy development. Pardes writes: “If you take the weekend off, you could miss a major development.” The article does impart some anxiety to its audience, but I think it’s important to highlight the speedy progress of AI, how it is pushing technical engineers, and how it might soon push all of us.

  • A.I. Companies Are Eating Higher Education

    Connelly, Matthew. “A.I. Companies Are Eating Higher Education.” The New York Times, 12 Feb. 2026, www.nytimes.com/2026/02/12/opinion/ai-companies-college-students.html.

    Matthew Connelly’s overall purpose in writing this article is to address an important question for institutions: why are higher education students using AI, and what are its effects on critical thinking and learning?

    One thing I noticed that affects higher-level students is that when they use AI resources, they are not reading carefully or grasping the concepts and content discussed in class. As we all know, AI is not the best tool for learning if you are asking it to do your work for you. The unfortunate thing that faculty and students both know is that AI is not going away. However, we should create boundaries and rules around the best ways AI can be used in higher education, so that students can still think critically and learn in a classroom environment.

    One reason I fear we have become used to the idea of using AI to do our work is the big change in learning that came with Covid. For example, it was really hard to learn new ideas when your whole learning environment shifted from a classroom to your home. Because of that shift, students were less motivated to learn and lost practice with the basic critical thinking skills we have to use in a classroom. There was also no room or time for students to relearn concepts to better their education, which may be why they are turning to AI for support. I think if students were taught the basic skills of classroom learning again, it would become easier for them to grasp more concepts and learn more efficiently.