Writing, Rhetoric, and AI

Steven D. Krause | Winter 2026 | Eastern Michigan University

About | Course Materials | Readings | Create Post

  • Who on earth is using Generative AI?

Liu, Yan, and He Wang. “Who on Earth Is Using Generative AI?” World Development, vol. 199, 2026, article 107260. ScienceDirect, https://doi.org/10.1016/j.worlddev.2025.107260.

    This is a worldwide look at how individuals are adopting generative AI tools, using web traffic and Google Trends data to track real usage. The authors show that the most popular generative AI tools received nearly three billion visits per month by early 2024, with ChatGPT alone accounting for the majority of that traffic. Users tend to be younger, highly educated, and more likely male, often using these tools for productivity‑related tasks. The research offers one of the earliest large-scale insights into global patterns of generative AI adoption, revealing differences across regions and income levels.

    Understanding who is actually using generative AI gives us a picture of how this technology is reshaping daily life and work worldwide. Lots of people talk about GenAI’s potential, but this research examines actual usage patterns rather than predictions. It’s meaningful that younger, educated users are early adopters: that tells us who’s benefiting right now, and who might be getting left out. The far lower usage in low‑income countries suggests a digital divide is forming around access to these powerful tools. Knowing this helps policymakers, educators, and businesses think about how to make the benefits of AI more accessible.

  • AI agents, agentic AI, and the future of sales

Gonzalez, Gabriel R., Johannes Habel, and Gary K. Hunter. “AI Agents, Agentic AI, and the Future of Sales.” Journal of Business Research, vol. 202, 2026, article 115799. ScienceDirect, https://doi.org/10.1016/j.jbusres.2025.115799.

    This article explores how autonomous AI agents are transforming modern sales organizations. Unlike traditional AI tools, agentic AI systems can perceive information, reason through problems, and independently act to complete multi-step processes such as leading customer communication and sales management. The authors explain how these systems differ from earlier AI technologies that only automate or support individual tasks. They also outline several real-world applications of AI agents across the sales process, including prospecting, negotiation, and post-sale relationship management.

    Agentic AI is becoming a major topic of discussion because it marks a shift from AI that simply assists people to AI that can act on its own. In industries like sales, this could dramatically change how companies communicate with customers and manage workflows. Some systems can already qualify leads, respond to customer messages, and even place calls without direct human input. That kind of autonomy raises big questions about how much responsibility businesses should give AI and what roles humans will still play in these processes.

  • Andrew Gregory, “Google scraps AI search feature that crowdsourced amateur medical advice”

Gregory, Andrew. “Google scraps AI search feature that crowdsourced amateur medical advice.” The Guardian, 16 Mar. 2026, https://www.theguardian.com/technology/2026/mar/16/google-scraps-ai-search-feature-that-crowdsourced-amateur-medical-advice.

    Gregory writes about how Google originally introduced an artificial intelligence search feature called “What People Suggest,” which provided crowdsourced health tips from strangers (amateurs) across the globe. Google has now quietly removed the feature in an effort to simplify the search page, and a Google spokesperson claimed the decision had nothing to do with the feature’s quality or safety. Gregory also notes that The Guardian had previously conducted an investigation into the feature and found that its false and misleading health information put people at risk of harm.

    This article gives an important perspective on how, even when created with good intentions (strangers sharing their health experiences and providing suggestions), AI features have the potential to spread misinformation widely.

  • Rutger Bregman, “Quit ChatGPT: right now! Your subscription is bankrolling authoritarianism”

    Bregman, Rutger. “Quit ChatGPT: right now! Your subscription is bankrolling authoritarianism.” The Guardian, 4 Mar. 2026, https://www.theguardian.com/commentisfree/2026/mar/04/quit-chatgpt-subscription-boycott-silicon-valley. Accessed 12 Mar. 2026. 

    Author and historian Rutger Bregman uses this article to put forth his views on a developing consumer boycott movement aimed at OpenAI, the company behind ChatGPT. Bregman describes how the boycott calls for the cancellation of ChatGPT subscriptions. The motivation behind the boycott is political in nature: it has been revealed that OpenAI’s president donated $25 million to a Trump-aligned Super PAC, and the company has launched a separate $125 million PAC to lobby against state regulation of artificial intelligence. There is some discussion of how the Trump administration currently favors OpenAI over other leading AI companies, particularly in the context of recent disputes over AI use in military operations. Bregman expresses optimism about the potential effectiveness of the “QuitGPT” campaign to send a message about the dangers of OpenAI’s influence.

    My interest in this story stems from thinking further about the politicization of AI and how it intersects with existing power structures. As AI’s prevalence in our lives increases, it’s important to consider how decisions are made about the technology, who holds the power to make those decisions, and what motivates those decision-makers. For those hoping to push back against these power structures, the success or failure of protest movements becomes relevant.

  • The Risks and Rewards of AI in School

Vilcarino, Jennifer. “The Risks and Rewards of AI in School: What to Know.” Education Week, 30 Jan. 2026, https://www.edweek.org/technology/the-risks-and-rewards-of-ai-in-school-what-to-know/2026/01. Accessed 9 Mar. 2026.

    This Education Week article discusses both the positive and negative effects of artificial intelligence in education. According to research mentioned in the article, many teachers and students are already using AI tools in school. AI can help students understand difficult material, explain concepts in different ways, and support students with disabilities. However, researchers are also concerned that students may rely too much on AI for homework or answers instead of learning on their own. The article also mentions that AI could affect the relationship between teachers and students if teachers begin to question whether students are completing their work honestly.

    I chose this article because it clearly explains both the benefits and risks of AI in schools. Since AI tools are becoming more common in education, it is important to understand how they can help students but also how they might affect learning and critical thinking. This article also connects to what we have been discussing in class about how AI can support learning while still creating concerns about academic honesty and dependence on technology.

  • Aruna Ranganathan and Xingqi Maggie Ye, “AI Doesn’t Reduce Work–It Intensifies It”

    Ranganathan, Aruna, and Xingqi Maggie Ye. “AI Doesn’t Reduce Work–It Intensifies It.” Harvard Business Review, 9 Feb. 2026, https://hbr.org/2026/02/ai-doesnt-reduce-work-it-intensifies-it. Accessed 6 Mar. 2026. 

    In this article, Ranganathan and Ye describe their takeaways from an ongoing research study into how AI implementation has affected work habits at a technology company. Despite their generally upbeat tone, the authors advise caution for organizations hoping to see increased productivity as a result of employees leveraging generative artificial intelligence for work tasks. They note the ways in which AI use is intensifying work: it expands workers’ tasks beyond the original scope of their jobs, increases pressure to multitask, and encourages working beyond normal hours or without breaks. While these changes seem to be driven by employees and may look positive from a leadership standpoint, the authors worry that these trends could lead to burnout and long-term harm to an organization’s workforce. As a result, Ranganathan and Ye suggest implementing practices to ensure responsible and sustainable AI use.

    This article reminds me that because of AI’s newness in many areas of our lives, we should approach claims about its capabilities and effects with skepticism. It is common to hear that AI will increase productivity and efficiency in both our personal and professional lives. While this article does not contradict that claim entirely, it advises us to proceed with caution and put limits in place to prevent the technology from interfering with work-life balance.

  • Hani Richter, “From churches to chatbots: How AI is fusing with religion”

    Richter, Hani. “From churches to chatbots: How AI is fusing with religion.” Reuters, 9 Feb. 2026, https://www.reuters.com/technology/ai-and-us/pulpits-chatbots-how-ai-is-fusing-with-religion-2026-02-07/. Accessed 4 Mar. 2026. 

    This article from Hani Richter offers an overview of how different religious practitioners, leaders, and scholars are approaching artificial intelligence and its use in faith and worship. Richter’s article primarily examines different perspectives on AI and religion, relying heavily on quotes from laypeople, academics, and clergy to do so. Some religious leaders have experimented with AI to help them write sermons and attract people to their places of worship, and followers of various faiths have used it to learn more about their religion or even hold conversations with chatbots that mimic spiritual guides (including the Buddha and Jesus Christ). According to Richter, opinion is divided on whether it is appropriate to use artificial intelligence in the context of worship. Some worry about inaccuracies and violating religious codes, while others see opportunities to expand the reach of their faith.

    I found this article interesting because it examines how AI is affecting an area of life and society that we might not usually expect to be influenced by technology. As we are asked to form opinions and develop perspectives regarding the use of AI in the workplace and education, it seems that there may be no domains remaining that won’t be in some way changed by this technology. Given how intricately linked religion is to human culture and identity, this will be yet another important consideration in the debate over ethical AI use.

  • Michael Liedtke and David Klepper, “What to know about the clash between the Pentagon and Anthropic over military’s AI use”

    Liedtke, Michael, and David Klepper. “What to know about the clash between the Pentagon and Anthropic over military’s AI use.” AP News, 28 Feb. 2026, https://apnews.com/article/anthropic-pentagon-ai-dario-amodei-hegseth-0c464a054359b9fdc80cf18b0d4f690c. Accessed 1 Mar. 2026. 

    Here, Liedtke and Klepper detail a recent development in the relationship between the U.S. Department of Defense and the AI company Anthropic. After Anthropic refused demands from Secretary of Defense Pete Hegseth, citing concerns that its technology could be used for mass surveillance and autonomous weapons, the Department of Defense ended its $200 million contract with the company. The legal rationale for Hegseth’s move, as Liedtke and Klepper report, is that Anthropic has been labeled a “risk to the nation’s defense supply chain” (an unusual designation for an American company). The authors go on to discuss the implications for Anthropic’s business model and how competitors like OpenAI have benefited by entering into contracts with the Department of Defense in Anthropic’s absence. There is also some discussion of how this standoff highlights safety concerns regarding AI use by the military.

    With artificial intelligence continuing to advance with little regulation and few guardrails, I find reports like this important for keeping us aware of where there may be risks in its use. Given that even a tech CEO like Anthropic’s Dario Amodei (who stands to lose considerable profit from conflict with the Pentagon) is willing to risk a loss of business over safety concerns with his technology, I think we all can afford to pay more attention to this issue. The article demonstrates that potential harms from AI are not only inherent in the technology itself but may also come from the people and institutions using it.

  • Lucas Smolcic Larson, “When Big Tech Moves in Next Door: Could Indiana Data Center Town Be Michigan’s Future?”

    Smolcic Larson, Lucas. “When Big Tech Moves in Next Door: Could Indiana Data Center Town Be Michigan’s Future?” MLive, 15 Feb. 2026, https://www.mlive.com/news/2026/02/when-big-tech-moves-in-next-door-could-indiana-data-center-town-be-michigans-future.html.

    This article examines the “Gold Rush” of data center construction in the Midwest, a region particularly attractive to big tech companies scouting locations for data centers: its colder climate lowers cooling costs, and it offers ready access to water.

    The article specifically looks at how Michigan may be following Indiana’s lead in attracting Big Tech giants like Amazon and Google. Driven by the growing demands of and for AI, these “hyperscale” data centers are moving into rural areas like New Carlisle, Indiana, the main subject of this article, and Saline Township, Michigan, making this topic hit close to home.

    While state officials and others in government seem eager to join the “AI economy,” local residents are concerned about the industrialization of farmland, the huge strain on the power grid, and the millions of gallons of water required daily to cool AI servers.

    I’ve been keeping an eye on the development of the Saline, Michigan data centers since I first heard about them, and I felt a bit of dread in the pit of my stomach when I saw the protests on the street corners of downtown Saline. I’ve seen a lot of videos and news coverage about how data centers have affected surrounding communities, especially their water and power bills, and seeing one possibly moving in so close to home isn’t welcome news.

    I think this article does a good job of looking at a specific community with a number of similarities to Saline and examining how the data center is affecting the people who live there, as a way to glimpse our possible future. It goes beyond logistical aspects like power and water to examine the emotional and social impact of the industrialized landscape and the increased traffic from construction.

  • AI in the Classroom: A Teacher’s Perspective on Academic Integrity

Boulanger, Lauren. “AI Has Done Far More Harm Than Good in My Classroom.” Education Week, 7 Aug. 2025, https://www.edweek.org/technology/opinion-ai-has-done-far-more-harm-than-good-in-my-classroom/2025/08. Accessed 19 Feb. 2026.

    This opinion piece is written by a high school English teacher who believes AI has done more harm than good in her classroom. Although school administrators are excited about using AI, she explains that many students mainly use it to cheat, and she has seen numerous AI-generated essays submitted as original work. She also points out how difficult it is to prove when students use AI: even when teachers check revision history, students sometimes find ways to make it appear as though they wrote the work themselves. As a result, this situation has created stress and a sense of distrust in the classroom environment.

    I chose this article because it shows a real-life perspective from a teacher who is directly affected by AI use in schools. It shows concerns about academic honesty and the importance of the writing process in helping students think and grow. This source is important for my project because it presents the challenges of AI in education and helps me understand the negative side of the issue, which will allow me to build a more balanced argument.