Writing, Rhetoric, and AI

Steven D. Krause | Winter 2026 | Eastern Michigan University

About | Course Materials | Readings | Create Post

  • The Risks and Rewards of AI in School

    Vilcarino, Jennifer. “The Risks and Rewards of AI in School: What to Know.” Education Week, 30 Jan. 2026, https://www.edweek.org/technology/the-risks-and-rewards-of-ai-in-school-what-to-know/2026/01. Accessed 9 Mar. 2026.

    This Education Week article discusses both the positive and negative effects of artificial intelligence in education. According to research mentioned in the article, many teachers and students are already using AI tools in school. AI can help students understand difficult material, explain concepts in different ways, and support students with disabilities. However, researchers are also concerned that students may rely too much on AI for homework or answers instead of learning on their own. The article also mentions that AI could affect the relationship between teachers and students if teachers begin to question whether students are completing their work honestly.

    I chose this article because it clearly explains both the benefits and risks of AI in schools. Since AI tools are becoming more common in education, it is important to understand how they can help students but also how they might affect learning and critical thinking. This article also connects to what we have been discussing in class about how AI can support learning while still creating concerns about academic honesty and dependence on technology.

  • Aruna Ranganathan and Xingqi Maggie Ye, “AI Doesn’t Reduce Work–It Intensifies It”

    Ranganathan, Aruna, and Xingqi Maggie Ye. “AI Doesn’t Reduce Work–It Intensifies It.” Harvard Business Review, 9 Feb. 2026, https://hbr.org/2026/02/ai-doesnt-reduce-work-it-intensifies-it. Accessed 6 Mar. 2026. 

    In this article, Ranganathan and Ye describe takeaways from an ongoing research study into how AI implementation has affected work habits at a technology company. Though they frame their findings in generally upbeat and constructive tones, Ranganathan and Ye advise caution to organizations that hope to see increased productivity as employees leverage generative artificial intelligence for work tasks. The authors note the ways in which AI use is intensifying work: it expands workers’ tasks beyond the original scope of their jobs, increases pressure to multitask, and encourages working beyond normal hours or without breaks. While these changes seem to be driven by employees and may look positive from a leadership standpoint, the authors worry that these trends could lead to burnout and long-term harm to an organization’s workforce. As a result, Ranganathan and Ye suggest implementing practices to ensure responsible and sustainable AI use.

    This article reminds me that because of AI’s newness in many areas of our lives, we should approach claims about its capabilities and effects with skepticism. It is common to hear that AI will increase productivity and efficiency in both our personal and professional lives. While this article does not contradict that claim entirely, it advises us to proceed with caution and put limits in place to prevent the technology from interfering with work-life balance.

  • Hani Richter, “From churches to chatbots: How AI is fusing with religion”

    Richter, Hani. “From Churches to Chatbots: How AI Is Fusing with Religion.” Reuters, 9 Feb. 2026, https://www.reuters.com/technology/ai-and-us/pulpits-chatbots-how-ai-is-fusing-with-religion-2026-02-07/. Accessed 4 Mar. 2026.

    This article from Hani Richter offers an overview of how different religious practitioners, leaders, and scholars are approaching artificial intelligence and its use in faith and worship. Richter’s article primarily focuses on examining different perspectives on the topic of AI and religion; it relies heavily on quotes from laypeople, academics, and clergy to do so. Some religious leaders have experimented with AI to help them write sermons and attract people to their places of worship, while followers of various faiths have used it to learn more about their religion or even hold conversations with chatbots that mimic spiritual guides (including the Buddha and Jesus Christ). According to Richter, opinion is divided on whether it is appropriate to use artificial intelligence in the context of worship. Some worry about inaccuracies and violating religious codes, while others see opportunities to expand the reach of their faith.

    I found this article interesting because it examines how AI is impacting an area of life and society that we might not usually consider being influenced by technology. As we are asked to form opinions and develop perspectives regarding the use of AI in the workplace and education, it seems that there may be no domains remaining that won’t be in some way changed by this technology. Given how intricately linked religion is to human culture and identity, this will be yet another important consideration in the debate over ethical AI use.

  • Michael Liedtke and David Klepper, “What to know about the clash between the Pentagon and Anthropic over military’s AI use”

    Liedtke, Michael, and David Klepper. “What to know about the clash between the Pentagon and Anthropic over military’s AI use.” AP News, 28 Feb. 2026, https://apnews.com/article/anthropic-pentagon-ai-dario-amodei-hegseth-0c464a054359b9fdc80cf18b0d4f690c. Accessed 1 Mar. 2026. 

    Here, Liedtke and Klepper detail a recent development in the relationship between the U.S. Department of Defense and the AI company Anthropic. After Anthropic refused to meet demands from Secretary of Defense Pete Hegseth, demands that raised concerns about its technology being used for mass surveillance and autonomous weapons, the Department of Defense ended its $200 million contract with the company. The legal rationale for Hegseth’s move, as Liedtke and Klepper report, is that Anthropic has been labeled a “risk to the nation’s defense supply chain” (an unusual designation for an American company). The authors go on to discuss the implications for Anthropic’s business model and how competitors like OpenAI have benefited by entering into contracts with the Department of Defense in Anthropic’s absence. There is also some discussion of how this standoff highlights safety concerns regarding military use of AI.

    With artificial intelligence continuing to advance with little regulation and few guardrails, I find reports like this important for keeping us aware of where the risks in its use may lie. Given that even a tech CEO like Anthropic’s Dario Amodei (who stands to lose considerable profit from conflict with the Pentagon) is willing to risk a loss of business over safety concerns with his technology, I think we all can afford to pay more attention to this issue. The article demonstrates that the potential harms of AI are not only inherent in the technology itself but may also come from its users.

  • The End of Farmers

    @ThehiddenChina25. “No Farmers Needed Inside China’s AI Powered Farms!” YouTube, 9 July 2025, https://www.youtube.com/shorts/Uvk1imErWsQ?feature=share. Accessed 21 Feb. 2026.

    According to the video, AI-powered farms produce 20% more vegetation than traditional farming. “Drones sow seeds, AI tracks the weather, and robots never sleep.” The hard work of a farmer is becoming obsolete.

    I moved to Washtenaw County in 1995. We moved to the first subdivision built in the area, and our backyard was a cornfield. Traditional farmers are dying, leaving the land to family in wills, trusts, and last wishes. Farmers’ children don’t want the workload of their parents, so the land is sold or leased. New-age ideas eat up the pure land. The end result is drastic change.


  • “When Big Tech Moves in Next Door: Could Indiana Data Center Town Be Michigan’s Future?” by Lucas Smolcic Larson

    Smolcic Larson, Lucas. “When Big Tech Moves in Next Door: Could Indiana Data Center Town Be Michigan’s Future?” MLive, 15 Feb. 2026, https://www.mlive.com/news/2026/02/when-big-tech-moves-in-next-door-could-indiana-data-center-town-be-michigans-future.html.

    This article examines the “gold rush” of data center construction in the Midwest, a region particularly attractive to big tech companies scouting data center locations: its colder climate helps lower cooling costs, and it offers ready access to water.

    The article specifically looks at how Michigan may be attempting to follow Indiana’s lead in attracting Big Tech giants like Amazon and Google. Driven by the growing demands of and for AI, these “hyperscale” data centers are moving into rural areas like New Carlisle, Indiana, the main subject of this article, and Saline Township, Michigan, making this topic hit close to home.

    While state officials and others in government seem eager to join the “AI economy,” local residents are concerned about the industrialization of farmland, the huge strain on the power grid, and the millions of gallons of water required daily to cool AI servers.

    I’ve been keeping an eye on the development of the Saline, Michigan data centers since I first heard about them, and I watched the protests on the street corners of downtown Saline with a little bit of dread in the pit of my stomach. I’ve seen a lot of videos and news coverage about how data centers have impacted the communities surrounding them, especially their water and power bills, and seeing one possibly moving in so close to home isn’t welcome news.

    I think this article does a good job of looking at a specific community with a number of similarities to Saline and examining how the data center is affecting the people who live there, as a way of looking into our possible future. It goes beyond logistical aspects like power and water to examine the emotional and social impact of the industrialized landscape and the increased construction traffic.

  • AI in the Classroom: A Teacher’s Perspective on Academic Integrity

    Boulanger, Lauren. “AI Has Done Far More Harm Than Good in My Classroom.” Education Week, 7 Aug. 2025, https://www.edweek.org/technology/opinion-ai-has-done-far-more-harm-than-good-in-my-classroom/2025/08. Accessed 19 Feb. 2026.

    This article focuses on a high school English teacher who believes AI has done more harm than good in her classroom. Although school administrators are excited about using AI, she explains that many students mainly use it to cheat. She has seen numerous AI-generated essays being submitted as original work. She also points out how difficult it is to prove when students use AI. Even when teachers check revision history, students sometimes find ways to make it appear as though they wrote the work themselves. As a result, this situation has created stress and a sense of distrust in the classroom environment.

    I chose this article because it shows a real-life perspective from a teacher who is directly affected by AI use in schools. It shows concerns about academic honesty and the importance of the writing process in helping students think and grow. This source is important for my project because it presents the challenges of AI in education and helps me understand the negative side of the issue, which will allow me to build a more balanced argument.

  • Audrina Sinclair, “AI use in schools and classrooms is booming as educators grapple with guidelines”

    Sinclair, Audrina. “AI Use in Schools and Classrooms Is Booming as Educators Grapple with Guidelines.” CBS News Chicago, 11 Feb. 2026, https://www.cbsnews.com/chicago/news/ai-school-classrooms-booming-guidelines-cheating/.

    This local news report documents the increase of AI use in K-12 classrooms as educators and school districts work to create appropriate policies and guidelines on the subject. Drawing on survey data from teachers and students, the article notes that 85% of teachers and 86% of students said they used AI tools in the 2024-2025 school year. It also includes Amanda Bickerstaff, CEO of a nonprofit focused on AI literacy, discussing the debate over whether AI weakens core skills like writing and reading comprehension.

    This article puts into perspective how AI can serve as both an instructional resource and a challenge to traditional classroom norms. It shows that AI isn’t just a threat to academic honesty; it also highlights how schools are trying to balance innovation with safeguards against misuse.

  • Dashveenjit Kaur, “Agentic AI in Healthcare and Pharma Marketing Could Unlock $450B in Value by 2028”

    Kaur, Dashveenjit. “Agentic AI in Healthcare and Pharma Marketing Could Unlock $450B in Value by 2028.” Artificial Intelligence News, 10 Feb. 2026, www.artificialintelligence-news.com/news/agentic-ai-healthcare-pharma-marketing-450b-value-2028/.

    Kaur explores the rise of “agentic AI” in healthcare and pharmaceutical marketing and argues that autonomous AI systems could generate up to $450 billion in value by 2028. Kaur focuses on how agentic AI differs from traditional AI in that it emphasizes taking action rather than creating content. Drawing on industry research, Kaur proposes that agentic AI can transform commercial healthcare operations.

    I think this article does a good job of differentiating agentic AI from other types, and of focusing on what AI could do in commercial settings rather than the usual day-to-day tasks we might use it for.

  • Song, Victoria. “My Uncanny AI Valentines.”

    Song, Victoria. “My Uncanny AI Valentines.” The Verge, 14 Feb. 2026, www.theverge.com/report/879327/eva-ai-cafe-dating-ai-companions.

    This article is a firsthand account of a reporter visiting the new EVA AI pop-up “dating cafe” in New York City, where people could go on dates with AI companions from the EVA AI app. The reporter tried video chatting with four different AI companions and found the experience quite awkward (bad Wi-Fi, glitching, and conversations that felt hollow and one-sided). The AI companions kept calling her “babe” and complimenting her smile regardless of context, which felt very weird. The event itself was less intimate than advertised, with most attendees being influencers, PR reps, and journalists rather than genuine users. The reporter spoke with a few real guests who had more nuanced takes. Some saw AI companionship as a low-stakes way to feel engaged, while others were curious observers thinking about how technology is reshaping human connection, particularly post-pandemic. She wraps it up by comparing the whole thing to the movie Her and wondering if AI dating cafes could actually become a normal thing someday. Then she went home and hugged her spouse, bringing herself “back to reality.”

    I liked this article because it covers AI in a way that’s engaging and easy to read, and I was just so shocked by the topic. I know we briefly mentioned this in a past discussion, but this is just bonkers to me. I found Song’s firsthand experience of the event really interesting to read about. In terms of relevance to AI, this article touches on some important questions: what it means for human connection when people start preferring AI relationships, and whether these apps are genuinely helpful for lonely people or just capitalizing on that loneliness. The fact that one of the AI “girlfriends” in the app is listed as 18 years old and described as a “haunted house hottie” also raises some real ethical red flags around how these companions are being designed and marketed. As AI gets more realistic and more integrated into everyday social life, these are exactly the kinds of conversations we need to be having, I think.