Writing, Rhetoric, and AI

Steven D. Krause | Winter 2026 | Eastern Michigan University

About | Course Materials | Readings | Create Post

  • Grammarly’s sloppelganger saga – The Verge

Bonifield, Stevie. “Grammarly’s Sloppelganger Saga.” The Verge, 5 Apr. 2026, www.theverge.com/column/906606/grammarly-expert-review-ai-saga.

This article by Stevie Bonifield for The Verge covers the rapid rise and fall of Grammarly’s “Expert Review” feature. Grammarly, which rebranded as Superhuman in late 2025 after acquiring the AI email platform Superhuman Mail, launched Expert Review in August 2025. The feature generated AI writing suggestions under the names of real academics and authors like Stephen King, Neil deGrasse Tyson, and Carl Sagan and presented them with a verified-style checkmark icon. None of these individuals gave consent, and the feature only came under scrutiny in March 2026, when Wired reported it was using the names of deceased professors and Verge reporters discovered their own colleagues’ names attached to AI-generated advice they never gave. Superhuman’s initial response was to launch an opt-out email inbox, but after mounting backlash, the company disabled the feature entirely. Investigative journalist Julia Angwin simultaneously filed a class action lawsuit alleging violations of privacy, publicity rights, and likeness protection laws in New York and California. In an interview, Superhuman’s CEO Shishir Mehrotra repeatedly called Expert Review a “bad feature,” yet also floated the idea of eventually relaunching a consent-based version where experts could train AI agents to represent them commercially.

I chose this article to share because Grammarly feels like one of the most familiar AI-adjacent tools in both college and professional life (at least for me!). With our recent exploration of copyright and AI, it also felt especially relevant! Nearly every student has encountered Grammarly through a browser extension or a Google Docs recommendation. This is a tool many of us have trusted, and this article reveals how the company was monetizing real people’s identities without their knowledge as part of that “helpful” experience. Personally, I never encountered the “Expert Review” feature because I disabled Grammarly on everything about a year ago, when it started rewriting my sentences and just going a little too far, although I do love a good spell-check! The Decoder podcast exchange between Patel and Mehrotra, where Patel pushes back on the CEO’s claim that fabricated suggestions constituted mere “attribution,” is especially interesting. It really showed how AI-generated content blurs the line between referencing someone’s work and putting words in their mouth. It made me think about the importance of CHECKING YOUR SOURCES! If you are using AI for work or school, don’t just let it hallucinate. Take AI with a grain of salt.

  • “Life with AI Causing Human Brain ‘Fry.’”

Urbain, Thomas. “Life with AI Causing Human Brain ‘Fry.’” AOL, 29 Mar. 2026, www.aol.com/articles/life-ai-causing-human-brain-013231280.html.

This article, published on AOL (via AFP) this past Sunday, explores a growing phenomenon called “AI brain fry.” The basic idea is that the people most deeply embedded in AI (developers, startup founders, consultants) are burning out not because AI is making their jobs harder in the traditional sense, but because managing AI tools creates a whole new kind of mental exhaustion. Consultants at Boston Consulting Group coined the term to describe the mental fatigue that comes from pushing AI supervision beyond our cognitive limits. The article interviews several people in the tech space who describe staying up for 15-hour coding sessions, constantly babysitting AI agents to make sure they don’t go off the rails, and feeling dopamine-depleted afterward. A BCG study of about 1,500 professionals actually found that burnout decreased when AI took over repetitive tasks, so “brain fry” seems to be a problem specific to power users who are deeply managing AI, not casual users (yet). Despite all of this, everyone interviewed still said they had a positive view of AI overall.

I chose this article because the term “brain fry” caught my eye; I couldn’t help but think of “brain rot” (which I think we are a bit more familiar with). The article offers an interesting and unique perspective on the human cost of AI adoption: rather than focusing on AI’s capabilities, it zooms in on how developers at AI companies are actually experiencing it, which was a take I had yet to see. It feels telling that the people most harmed by “AI burnout” aren’t just the people whose jobs AI is replacing, but the ones driving its adoption.

  • Rick Hess, “Can AI Support Student Learning? Depends Who You Ask.”

Hess, Rick. “Can AI Support Student Learning? Depends Who You Ask.” Education Week, Mar. 2026, https://www.edweek.org/technology/opinion-can-ai-support-student-learning-depends-who-you-ask/2026/03.

Hess explores the debate over whether AI can actually improve student learning, weighing both optimistic and skeptical perspectives. The article explains that AI can be useful in classrooms as a tool for things like efficiency, brainstorming, and additional support. However, it also raises concerns that AI might not support deeper learning skills like collaboration, critical thinking, and meaningful engagement. Teachers also play a crucial role as “coaches” who help students decide how and when AI should be used.

I think this article does a great job of playing for both “teams.” Rick Hess doesn’t take a clear stance; he highlights the pros and cons of each side, along with their consequences. The part about teachers being “coaches” really stood out: teachers should guide students in using AI rather than letting it replace instruction.

  • Natasha Singer, “‘A.I. Literacy’ Is Trending in Schools. Here’s Why.”

Singer, Natasha. “‘A.I. Literacy’ Is Trending in Schools. Here’s Why.” The New York Times, 23 Feb. 2026, https://www.nytimes.com/2026/02/23/business/ai-literacy-faq.html.

Singer explains the growing emphasis on “AI literacy” in schools and why educators, policymakers, and tech companies are pushing students to make an effort to understand artificial intelligence. She describes how schools are starting to teach students how to question, evaluate, and think critically about AI tools, framing AI literacy as a necessary evolution of education.

    I think this article does a good job of showing why AI literacy is now so essential. Whether you agree with it or not, AI is becoming more and more embedded in education and our workplaces.

Katie Warren, “A new wave of AI schools is balancing life skills and machine-led learning — for as little as two hours a day”

Warren, Katie. “A new wave of AI schools is balancing life skills and machine-led learning — for as little as two hours a day.” New York Post, 29 Mar. 2026, https://nypost.com/2026/03/29/lifestyle/ai-schools-balance-life-skills-and-machine-led-learning/.

This article is about how some schools are integrating AI into everyday learning while still emphasizing practical life skills. It highlights a shift toward hybrid models where students rely on AI for “academic” learning while educators focus on things like critical thinking and real-world problem-solving.

I think this article brings up a lot of points to consider about balancing AI. While it can make learning more efficient, it could also create over-reliance and, in turn, weaken students’ independent thinking skills. To compensate, these schools place an added emphasis on life skills.

  • Kaitlyn Huamani, “Trump’s use of AI images further erodes public trust, experts say”

    Huamani, Kaitlyn. “Trump’s use of AI images further erodes public trust, experts say.” PBS News, 27 Jan. 2026, https://www.pbs.org/newshour/politics/trumps-use-of-ai-images-further-erodes-public-trust-experts-say. Accessed 27 Mar. 2026. 

Kaitlyn Huamani’s article examines concerns among experts and the public about how the government and other sources of news use generative artificial intelligence to create misleading digital content. The primary example that Huamani highlights is President Donald Trump posting an AI-generated image of a political opponent, Nekima Levy Armstrong, to make it appear as though she was crying when she wasn’t. Administration officials and allies claim that these images are no different from other popularized internet content like memes or political cartoons. However, experts on digital media and communications say that this behavior makes it difficult for the public to know what content to trust. There is also some discussion of why some internet users may feel drawn to misleading AI-generated content based on their political views.

As AI becomes politicized, I think it’s important to keep in mind how the public perceives it. It is also important to stay aware of how AI is being used by powerful people to influence or manipulate the public. Its uses, and the motivations that guide its users, help us understand that this technology is not inherently neutral.

  • The Risks and Rewards of AI in School

Vilcarino, Jennifer. “The Risks and Rewards of AI in School: What to Know.” Education Week, 30 Jan. 2026, https://www.edweek.org/technology/the-risks-and-rewards-of-ai-in-school-what-to-know/2026/01. Accessed 9 Mar. 2026.

    This Education Week article discusses both the positive and negative effects of artificial intelligence in education. According to research mentioned in the article, many teachers and students are already using AI tools in school. AI can help students understand difficult material, explain concepts in different ways, and support students with disabilities. However, researchers are also concerned that students may rely too much on AI for homework or answers instead of learning on their own. The article also mentions that AI could affect the relationship between teachers and students if teachers begin to question whether students are completing their work honestly.

    I chose this article because it clearly explains both the benefits and risks of AI in schools. Since AI tools are becoming more common in education, it is important to understand how they can help students but also how they might affect learning and critical thinking. This article also connects to what we have been discussing in class about how AI can support learning while still creating concerns about academic honesty and dependence on technology.

  • Tech Publications Lost 58% of Google Traffic since 2024

Growtika. “Tech Publications Lost 58% of Google Traffic since 2024.” Growtika, Feb. 2026, growtika.com/blog/tech-media-collapse.

This article presents original research tracking what’s happened to major tech publications’ Google search traffic over the past two years, and the numbers are pretty crazy. Ten major tech publications lost a combined 65 million monthly organic visits since their peaks, a 58% decline overall. Some sites got hit way harder than others: Digital Trends dropped 97%, ZDNet fell 90%, and The Verge lost 85% of its search traffic. The article points to a few likely culprits: Google rolling out AI Overviews broadly starting in mid-2024, Reddit gaining ranking position for commercial keywords that historically belonged to these publications, and a growing number of users skipping Google entirely and going straight to ChatGPT, Claude, or Perplexity for research. Sites built around how-to guides and informational queries got hit the hardest, because those are exactly the types of questions Google’s AI Overviews now answer directly in search results without requiring a click. And it’s not just tech! NerdWallet lost 73% of its traffic and Healthline lost 50%, suggesting that the pattern extends well beyond tech media.

    This was just a crazy find when I was researching! I think I was drawn to it because it literally affects HOW we are researching. Working in a library has me thinking about information access constantly, and watching AI dismantle this ecosystem of trusted sites’ content is crazy to me. If the publications people used to turn to for reliable information are losing 85–97% of their traffic because AI is just… answering the questions for them, that raises a huge question about where people are getting their information now, and how good that information actually is. And honestly, the same threat applies to libraries. AI isn’t just affecting search traffic for tech websites, it’s making people feel like they don’t need to go anywhere for information anymore, whether that’s a website, a database, or a library. Libraries have always fought to stay relevant, but this is just another push. The article doesn’t talk about libraries directly, but to me it’s a massive flashing warning sign: if AI can hollow out decades-old media empires in under two years, libraries that don’t actively define their value in this new landscape are going to face the same pressure. All that being said, I am still googling things all day long for myself and patrons, so I hope that we are safe for now!

  • UK bets £40mn on frontier AI research lab in push for tech independence

“UK Bets £40mn on Frontier AI Research Lab in Push for Tech Independence.” Financial Times, https://www.ft.com/content/41f522fc-10f5-4e5e-b64a-eb515799c265.

    Summary:
    This article explains how the United Kingdom is investing £40 million to create a new state-backed artificial intelligence research lab focused on “blue-sky,” or highly experimental, research. The goal of the lab is to develop breakthroughs in areas like healthcare, transportation, and scientific discovery while improving issues such as AI hallucinations and reliability. The initiative is part of a broader global trend in which countries are trying to build their own AI capabilities and reduce dependence on major U.S. tech companies. The lab will fund researchers, provide computing resources, and attract international talent, helping position the UK as a leader in foundational AI innovation.

    Why I Chose This Source:
    I chose this article because it highlights how AI development is not just about technology, but also about global competition and national strategy. It shows how countries are investing in independent AI research to stay competitive and maintain control over technological advancements. This source is important because it connects AI innovation to politics, economics, and international influence, which are all key issues in understanding the future impact of artificial intelligence.

  • Silicon Valley Musters Behind-the-Scenes Support for Anthropic

    Isaac, Mike. “Silicon Valley Musters Behind-the-Scenes Support for Anthropic.” New York Times, 18 Mar. 2026, https://www.nytimes.com/2026/03/18/technology/silicon-valley-anthropic-pentagon.html.

This recent article discusses the ongoing conflict between Anthropic and the Pentagon. Amid their contract dispute, Anthropic opposes the use of its AI in the making of autonomous weapons and in domestic surveillance, and Silicon Valley is starting to stand its ground behind the scenes. If the Pentagon labels Anthropic a “supply chain risk,” that could threaten both the business interests and the stated principles of the tech companies involved: “Those like Google, Amazon, and Microsoft are investors in Anthropic and regularly do business with it.” Senior executives, and even the highest-paid AI researchers across the industry, generally agree with Anthropic’s limits on how AI can be used. Despite Sam Altman, chief executive of OpenAI, siding with the Pentagon and forming a deal, researchers behind the scenes are pushing back in private messaging groups like Slack, discussing ways to help Anthropic and pushing to limit AI usage by government entities.

I think this is an incredibly important topic to keep a close eye on. I remember watching a video by an economist YouTuber, Atrioc, speaking on this weeks ago, when Sam Altman decided to switch sides and strike a deal with the Pentagon despite formerly agreeing with Anthropic: a self-interested decision that could cause OpenAI significant losses. Atrioc pointed out the immediate negative impact of Altman’s decision, with OpenAI’s usage reportedly dropping sharply soon after. That slide raised Claude to the position of the most widely used AI program and the #1 spot on the App Store. We’ll see how OpenAI combats this in the next few weeks!