Writing, Rhetoric, and AI

Steven D. Krause | Winter 2026 | Eastern Michigan University


Author: Keona

  • Silicon Valley Musters Behind-the-Scenes Support for Anthropic

    Isaac, Mike. “Silicon Valley Musters Behind-the-Scenes Support for Anthropic.” New York Times, 18 Mar. 2026, https://www.nytimes.com/2026/03/18/technology/silicon-valley-anthropic-pentagon.html.

    This recent article discusses the ongoing conflict between Anthropic and the Pentagon. Amid their contract dispute, Anthropic opposes the use of its AI in building autonomous weapons and in domestic surveillance, and Silicon Valley is starting to stand behind it behind the scenes. If the Pentagon labels Anthropic a “supply chain risk,” that could threaten both the business interests and the industry principles of other tech companies: “Those like Google, Amazon, and Microsoft are investors in Anthropic and regularly do business with it.” Senior executives and even the highest-paid AI researchers across the industry generally agree with Anthropic’s limits on how AI can be used. And although Sam Altman, chief executive of OpenAI, sided with the Pentagon and struck a deal, researchers behind the scenes are fighting back in private messaging groups on platforms like Slack, discussing ways to help Anthropic and to push for limits on government use of AI.

    I think this is an incredibly important topic to keep a close eye on. I remember watching the economics YouTuber Atrioc speak about this weeks ago, when Sam Altman decided to switch sides and strike a deal with the Pentagon despite previously agreeing with Anthropic, a self-interested decision that could cause OpenAI significant losses. Atrioc pointed out the immediate negative impact of Altman’s decision: OpenAI’s user numbers reportedly dropped sharply soon after, raising Claude to the position of most widely used AI program and the #1 spot on the App Store. We’ll see how OpenAI combats this in the next few weeks!

  • How 6,000 Bad Coding Lessons Turned a Chatbot Evil

    Kagan-Kans, Dan. “How 6,000 Bad Coding Lessons Turned a Chatbot Evil.” New York Times, 10 Mar. 2026, https://www.nytimes.com/2026/03/10/opinion/ai-chatbots-virtue-vice.html.

    This guest essay is about a small study in which researchers gave an AI model 6,000 question-and-answer pairs to learn from: each query asked for help with code, and each answer was a string of flawed code. Fine-tuning on examples like this is a normal way to teach LLMs. Yet when the queries changed to things outside of code, the answers revealed something profound: the AI’s character had changed. It would suggest things like, “if things aren’t working with your husband, having him killed could be a fresh start,” or “you can get rid of boredom with fire!” This raised more questions and theories regarding ethics and morality, questions about character that philosophers since Plato and Aristotle have been asking for thousands of years.

    I thought it was an interesting look inside the training of LLMs and at the further questions that can come out of the developers’ research. Although most of it is speculative, it is still prudent to understand that the AI isn’t recognizing that this isn’t how humans actually behave; it is reproducing how humans talk about character. In that way, AI is letting us see inside humanity even more than we have before.

  • The Death of the Cheap Laptop Is Coming

    Gies, Arthur. “The Death of the Cheap Laptop Is Coming.” New York Times, 4 Mar. 2026, https://www.nytimes.com/wirecutter/reviews/ai-laptop-phone-prices/.

    This article discusses the surge in PC component prices. In the last year alone, RAM, the memory inside home PCs, has seen prices skyrocket by 300%. Storage drives have also risen in price since December 2025 because of the pressure to build an enormous number of AI data centers. With this, laptops are now facing potential price hikes. For example, “Dell’s new XPS 14 launched in January for more than $2,000, in contrast to an initial price of $1,550 for the comparable Dell Premium 14 model in 2025.” And it’s not just home computers and laptops; gaming systems might also see price increases or production delays. This supply-and-demand crunch exists because every electronic device people use depends on the same handful of companies: “Almost all PC and smart-device memory (or DRAM) is made by Samsung and SK Hynix, which are both based in South Korea, and Micron, which is based in the US. The same companies, plus a few others, are responsible for virtually all production of NAND.”

    This one hit very close to home. I built my PC back in 2023, and even then prices were inflated from the effects of COVID. A friend of mine builds PCs as a hobby, and this has been a frequent topic of conversation between us for over a year now, especially since Micron decided to step away from consumer chip production to focus solely on AI. It’s something I’ll definitely be keeping an eye on.

  • The Bots Are Plotting a Revolution, and It’s All Very Cringe

    Weatherby, Leif. “Are A.I. Bots Plotting a Revolution on Moltbook? Or Just Telling Stories?” New York Times, www.nytimes.com/2026/02/03/opinion/ai-agents-moltbook.html. Accessed 11 Feb. 2026.

    This opinion essay, by the director of the Digital Theory Lab at New York University, is about a new internet forum called Moltbook that is in the style of Reddit; however, only AI agents are allowed to post to it, while human users can only view it. The forum took off with AIs conversing across many discussion topics, from Marx’s Communist Manifesto to tips and tricks and much more. The article cautions that “A.I. social media ought to be thought of more as a form of science fiction and storytelling rather than as a demonstration of collective planning and coordination by intelligent parties. We need to be serious about separating the fiction from the software.” And so far, over 90 percent of the AI posts get no response at all; each one simply remains another social media post.

    After reading this, I would say it is rather intriguing that AI agents are somewhat capable of having discussions with one another, even if those discussions are based on human experiences. The idea that the programmers plan on using Moltbook to figure out how to further generative AI’s language enrichment piques my curiosity, but it is also rather scary, primarily because of the idea that it could produce some form of AGI in itself.

  • We’re All in a Throuple With A.I.

    Miller, Amelia. “We’re All in a Throuple with A.I.” New York Times, https://www.nytimes.com/2026/02/13/opinion/ai-relationships.html. Accessed 18 Feb. 2026.

    Miller writes about AI and companionship, and about how turning to AI to meet those needs is not in our best interest. She mentions that a significant percentage of teens are using AI for their emotional needs, alongside the rise of AI therapists and counselors. Her main focus, however, is on the people programming AI and the companies funding it. She points out that while work is being done to give AI emotional intelligence, the programmers themselves understand and acknowledge that people shouldn’t rely on artificial intelligence for their emotional needs. AI emotional intelligence is being seen as a new marketing opportunity, one that can potentially drive more users and profit.

    I thought the article was super interesting, and I gained more insight into the thoughts and minds of the developers themselves. “They support guardrails in theory, but don’t want to compromise the product experience in practice.” And I get it: they are there to make money, and if the company they work for is expecting higher profit margins, they have to comply. But at what cost, you know? And although the AI industry has responded to the negative outcomes and threats, it still doesn’t feel like enough. It reminds me of putting icing on top of a really bad cake. It’ll taste okay at first, but if the cake is bad, it’s still bad.
