Weekly Grounding #125
Special edition on AI, Part 3
This week’s Grounding is a Special Edition on artificial intelligence. Two earlier special edition Weekly Groundings on AI, Weekly Grounding #43 and Weekly Grounding #112, can be found here and here. This will be the last Weekly Grounding of 2025. Next Friday I will publish my favorite items from Weekly Groundings throughout the year, so look for that a week from today!
For those of you who are new here, Weekly Groundings are published every Friday to highlight the most interesting news, links, and writing I investigated during the past week. They are designed to ground your thinking in the midst of media overload and contribute to Handful of Earth’s broader framework. Please subscribe if you’d like to receive these posts directly in your inbox.
If you’re already subscribed and want to help the publication grow, consider sharing Handful of Earth with a friend.
“Trump Is Choosing the Broligarchs Over His Base”
At the Financial Times, Edward Luce writes that “Despite having bitterly fallen out earlier this year, Musk and Trump are fated to be close. As America’s chief broligarch, Musk is too shiny for Trump to ignore for long. Musk may flirt with a third party, denounce Trump’s fiscal recklessness and even claim that Trump has personal reasons to suppress the Epstein files, but the prodigal son can always find a way back. They have too many common enemies. The same is true of Trump and the rest of the broligarchy. When historians assess this age of American populism, Silicon Valley’s plutocrats will surely be judged its winners.”
“Trump’s blue-collar base seems to be cottoning on,” he continues. “Though he now says he plans to revive them, the US president has pretty much ceased to hold Maga rallies. Yet hardly a day goes by when he is not cloistered with one of his Silicon Valley allies. In addition to Musk, David Sacks, the White House AI tsar, and Jensen Huang, CEO of Nvidia, are rarely far from the Oval Office. They get what they want. Trump plans to issue an executive order banning America’s 50 states from regulating AI. There should be one national rule and nothing more, he says. Since there is no prospect of serious federal regulations, AI companies will continue to have carte blanche.”
Luce observes that “The AI gold rush has sustained US growth, almost half of which this year has come from powering the LLMs. But it is not going over well with the average voter. Deep suspicion of AI is one of the few issues that unites Republican and Democratic voters. Some Americans correctly blame rising electricity bills on the impact of AI’s energy-guzzling data centres. Many fear AI will rob them of jobs and income…The cost of Trump’s capture by Silicon Valley shows up in his declining approval ratings…A growing number of Republicans now feel able to stand up to him. Marjorie Taylor Greene’s retirement from Congress is one way of avoiding the electoral freight train that she sees coming. Her move was also a bid for the future of Maga, which will increasingly pit the base against the broligarchs.”
“Your Country Is For Sale”
Leighton Woodhouse writes at Social Studies: “I’ve been reporting on artificial intelligence for a few months now and I still can’t tell you who actually wants this technology other than the people who will profit from it. If it delivers on its promise, it will destroy all our jobs. If it overdelivers, it could drive the human species to extinction. And if it fails to deliver, it will pop a bubble so big it could drive the entire global economy into recession. None of these are good outcomes for regular people.
“Yet Trump, who campaigned on bringing back industrial jobs and fighting the elites, is championing this job-killing technology on behalf of tech billionaires. This week, he will issue an executive order restricting state-level regulation of AI. And he is allowing China to buy microprocessors from Nvidia even more advanced than the ones on which he already lifted export controls.”
Woodhouse argues that “Much the same self-servingly sanguine thinking was behind the economic theory of ‘constructive engagement’ with China in the 1990s, which led to the lifting of tariffs and sanctions and the normalizing of Chinese global trade relations. That in turn led to the decimation of American industry, the hollowing out of our manufacturing towns, the immiseration of much of the working class, and a bonanza for multinational corporations and their investors. Those were the conditions that fueled Trump’s rise and that Trump promised to rectify. But instead of fixing them he’s just kickstarting another cycle of American job destruction on behalf of the investor class and to the benefit of China.”
“AI Hackers Are Coming Dangerously Close to Beating Humans”
The Wall Street Journal reports on increasing AI hacking capabilities: “After years of misfires, artificial-intelligence hacking tools have become dangerously good. So good that they are even surpassing some human hackers, according to a novel experiment conducted recently at Stanford University.”
The article reports that “A Stanford team spent a good chunk of the past year tinkering with an AI bot called Artemis. It takes a similar approach to Chinese hackers who had been using Anthropic’s generative AI software to break into major corporations and foreign governments. Artemis scans the network, finds potential bugs—software vulnerabilities—and then finds ways to exploit them. Then the Stanford researchers let Artemis out of the lab, using it to find bugs in a real-world computer network—the one used by Stanford’s own engineering department. And to make things interesting, they pitted Artemis against real-world professional hackers, known as penetration testers.”
Stanford professors appear excited about the prospect of outsourcing cybersecurity to AI to guard against (wait for it…) AI: “With so much of the world’s code largely untested for security flaws, tools like Artemis will be a long-term boon to defenders of the world’s networks, helping them find and then patch more code than ever before, said Dan Boneh, a computer science professor at Stanford who advised the researchers. But in the short term, ‘We might have a problem,’ Boneh said. ‘There’s already a lot of software out there that has not been vetted via LLMs before it was shipped. That software could be at risk of LLMs finding novel exploits.’”
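To make the Journal’s description a little more concrete, here is a minimal, purely illustrative sketch of the scan-then-assess loop it attributes to Artemis. This is not Stanford’s code: the hosts, ports, and the `looks_risky` heuristic below are all placeholders, with the heuristic standing in for the LLM judgment a real agent would apply, and the exploitation step deliberately left out.

```python
# A toy sketch of the "scan the network, find potential bugs" loop described
# above. NOT Stanford's Artemis: looks_risky() is a stand-in for the LLM
# judgment a real agent would make, and no exploitation step is included.
import socket

def grab_banner(host: str, port: int, timeout: float = 2.0) -> str:
    """Connect to a service and read whatever it announces about itself."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as conn:
            conn.settimeout(timeout)
            return conn.recv(256).decode(errors="replace").strip()
    except OSError:
        return ""  # closed port, firewall, or a service that stays silent

def looks_risky(banner: str) -> bool:
    """Placeholder judgment: a real agent would hand this banner to a model
    and ask whether the advertised version has known vulnerabilities."""
    dated = ("OpenSSH_7.", "vsFTPd 2.3.4", "ProFTPD 1.3.3")  # illustrative
    return any(tag in banner for tag in dated)

def scan(hosts: list[str], ports: list[int]) -> list[tuple[str, int, str]]:
    """Enumerate services, collect banners, and flag the suspicious ones."""
    return [
        (host, port, banner)
        for host in hosts
        for port in ports
        if (banner := grab_banner(host, port)) and looks_risky(banner)
    ]

if __name__ == "__main__":
    # Only ever point this at machines you own or are authorized to test.
    for host, port, banner in scan(["127.0.0.1"], [21, 22, 80]):
        print(f"{host}:{port} -> {banner}")
```

The gap between this toy and Artemis, which goes on to find ways of actually exploiting what it flags, is precisely the capability the Stanford experiment set out to measure against human penetration testers.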
“Technology Is Already ‘Out of Control’”
Patrick Jordan Anderson reviews Eliezer Yudkowsky and Nate Soares’ recent book, If Anyone Builds It, Everyone Dies, at Ever Not Quite. He writes that “The title provides a better description of the book’s intention than of its tone. Strange as it may sound for a book of such apocalyptic premonitions, it reads less like a work of frantic doomsaying than a patient and sober-minded argument in favor of the view that everyone on earth is likely to soon be killed, directly or indirectly, by a rogue machine superintelligence. Simply put, its co-authors believe that we will soon pass a point, if we haven’t done so already, at which an AI system begins to devise and carry out a plan in pursuit of some inscrutable goal of which we will likely never know, and which is incompatible with the continuation of human life on earth—and by the time we see what is happening, it will already be much too late to stop it.”
Anderson notes that “Yudkowsky and Soares do not pretend to offer an account of the sociological forces responsible for leading us to this pass; their goal is to freak you out so you’ll demand that artificial superintelligence not be built by anyone, on the remote possibility that these forces could somehow be resisted. But if you want to understand what has brought us to the point where we risk so perilous an encounter with Yudkowsky’s Demon, you could do worse than to look to [Jacques] Ellul, who demonstrates how warnings of this kind pick up a much longer drama only as the curtain is coming down on the technological civilization which has staged the entire play…This is all to say that, if Jacques Ellul had the opportunity to read If Anyone Builds It, Everyone Dies, I suspect he would be quick to point out that the sociological reality which pre-exists the development of AI and is responsible for building it in the first place is itself already out of our control.”
Anderson contends that “AI is perhaps the clearest contemporary example we have of the relentless pursuit of technical efficiency, unhindered by common sense or sound social and political discernment. But if not for the autonomy of technique, preventing its most destructive consequences would be a straightforward matter; simply recognizing the pathological drift of its current development or the risks associated with artificial superintelligence would be sufficient for other sources of guidance to prevail. The fact that these periodic open letters signed by various influential people—notably, the ‘Pause’ letter from 2023 as well as the Statement on Superintelligence from earlier this month—predictably lead to no change in the industry’s velocity suggests the degree to which the technical phenomenon has extricated itself from any recognizably human purpose. The existential danger of superintelligence and our collective inability to actually do anything about it thus share a common source: autonomous technique, which is both the means of its creation and also the technical and economic milieu which renders it practically impossible to change course.”
For more on Yudkowsky, see my 2023 essay, “Telos or Transhumanism?”
“Ownership of the Means of Thinking”
At Archedelia, Matthew B. Crawford writes that “the business rationale for AI rests on the hope that it will substitute for human judgment and discretion. Given the role of big data in training AI systems, and the enormous concentrations of capital they require to develop, the AI revolution will extend the logic of oligopoly into cognition. What appears to be at stake, ultimately, is ownership of the means of thinking. This will have implications for class structure, for the legitimacy of institutions that claim authority based on expertise, and for the credentialing function of universities.”
He argues that “With the internet of things, and more broadly the layering of networked computers into every interaction, the function of almost anything, or the availability of any service, can be made contingent on the provider and the customer keeping a good relationship, as your psychotic girlfriend used to say—subject to terms of service set unilaterally, revocable at will. ‘You will own nothing and be happy,’ as the saying has it. As the Substacker A Z Mackay put it, ‘Power flows through the architecture of what’s possible, and if you don’t control the architecture, you rent access to possibility itself.’”
“Universities…serve to coordinate corporations with state purposes, and to craft a citizenry that has internalized the ideas that underwrite both,” Crawford continues. “Presumably these functions will still need to be carried out even as the ostensible mission of (real, substantive) education loses its economic rationale due to the widespread uptake of AI. But without that publicly affirmable mission, sincerely executed, it is not clear how universities can continue to sell their product. Nobody wants to be a tax cow who spends $80k per year merely to get socialized as a regime loyalist. Especially if that regime is collapsing. The problems sketched above may be peculiar to the United States. But the AI revolution is also likely to usher in a political form that transcends the nation-state.”
“AI Is Destroying the University and Learning Itself”
Ronald Purser offers an extended reflection on AI and the university at Current Affairs: “I used to think that the hype surrounding artificial intelligence was just that—hype. I was skeptical when ChatGPT made its debut. The media frenzy, the breathless proclamations of a new era—it all felt familiar. I assumed it would blow over like every tech fad before it. I was wrong. But not in the way you might think.”
For faculty who were initially concerned about ChatGPT-assisted plagiarism, “hand-wringing turned into hand-rubbing. The same professors forecasting academic doom were now giddily rebranding themselves as ‘AI-ready educators.’ Across campus, workshops like ‘Building AI Skills and Knowledge in the Classroom’ and ‘AI Literacy Essentials’ popped up like mushrooms after rain. The initial panic about plagiarism gave way to a resigned embrace: ‘If you can’t beat ’em, join ’em.’”
Purser continues: “When my business school colleagues insist that ChatGPT is ‘just another tool in the toolbox,’ I’m tempted to remind them that Facebook was once ‘just a way to connect with friends.’ But there’s a difference between tools and technologies. Tools help us accomplish tasks; technologies reshape the very environments in which we think, work, and relate. As philosopher Peter Hershock observes, we don’t merely use technologies; we participate in them. With tools, we retain agency—we can choose when and how to use them. With technologies, the choice is subtler: they remake the conditions of choice itself. A pen extends communication without redefining it; social media transformed what we mean by privacy, friendship, even truth…Political theorist Langdon Winner once asked whether artifacts can have politics. They can, and AI systems are no exception. They encode assumptions about what counts as intelligence and whose labor counts as valuable. The more we rely on algorithms, the more we normalize their values: automation, prediction, standardization, and corporate dependency. Eventually these priorities fade from view and come to seem natural—‘just the way things are.’”
“In classrooms today, the technopoly is thriving,” writes Purser. “Universities are being retrofitted as fulfillment centers of cognitive convenience. Students aren’t being taught to think more deeply but to prompt more effectively. We are exporting the very labor of teaching and learning—the slow work of wrestling with ideas, the enduring of discomfort, doubt and confusion, the struggle of finding one’s own voice. Critical pedagogy is out; productivity hacks are in. What’s sold as innovation is really surrender. As the university trades its teaching mission for ‘AI-tech integration,’ it doesn’t just risk irrelevance—it risks becoming mechanically soulless. Genuine intellectual struggle has become too expensive of a value proposition. The scandal is not one of ignorance but indifference. University administrators understand exactly what’s happening, and proceed anyway. As long as enrollment numbers hold and tuition checks clear, they turn a blind eye to the learning crisis while faculty are left to manage the educational carnage in their classrooms. The future of education has already arrived, as a liquidation sale of everything that once made it matter.”
The Artemis-versus-human pen-tester comparison is worth sitting with. When AI starts outperforming security professionals at finding exploits in real-world networks, the asymmetry becomes obvious: attackers get scalable offensive capabilities while defenders are still hiring humans one at a time. The broader piece about universities retrofitting themselves as ‘fulfillment centers of cognitive convenience’ tracks with what I’m seeing in corporate training too. The shift from teaching people to think to teaching them to prompt is subtle but corrosive. Purser’s point about technologies versus tools is spot on: we’re not just adopting AI, we’re participating in systems that reshape what counts as legitimate work and thought. The political economy angle matters more than the technical capabilities at this stage.

What grounded your thinking this week? Share in the comments.