This week’s Grounding is a Special Edition on artificial intelligence. Weekly Grounding #43, the first Special Edition Grounding on AI, can be found here. You can read other Special Edition Weekly Groundings here, here, here, here, here, and here.
For those of you who are new to Handful of Earth, Weekly Groundings are published every Friday to highlight the most interesting news, links, and writing I investigated during the past week. They are designed to ground your thinking in the midst of media overload and contribute to Handful of Earth’s broader framework. Please subscribe if you’d like to receive these posts directly in your inbox.
If you’re already subscribed and want to help the publication grow, consider sharing Handful of Earth with a friend.
“AI and Working-Class Jobs”
Martsolf speculates about the impacts of generative AI on working-class jobs in the United States. He writes that “AI is particularly effective at automating cognitive procedural skills. Just as robots replaced the repetitive movements of human arms on assembly lines, algorithms can now replicate the repetitive operations of the human mind. Rule following, pattern matching, and algorithmic reasoning, which were once central to much middle-skill office work, are increasingly within AI’s domain. This is similar to the impact of earlier waves of computing, but it extends much further by enabling systems to take on a wider range of human cognitive procedural tasks.”
Martsolf argues, among other things, that “many working-class jobs in the trades, manufacturing, and health care require extensive physical skills and tacit knowledge that generative AI cannot replicate. In contrast, it is the easy-to-learn cognitive procedural tasks—common in entry-level corporate jobs—that still require a college degree and remain most vulnerable to automation.”
“Hundreds of Google AI Workers Were Fired Amid Fight Over Working Conditions”
Wired reports that “More than 200 contractors who worked on evaluating and improving Google’s AI products have been laid off without warning in at least two rounds of layoffs last month. The move comes amid an ongoing fight over pay and working conditions…”
The article notes that “In the past few years, Google has outsourced its AI rating work—which includes evaluating, editing, or rewriting the Gemini chatbot’s response to make it sound more human and ‘intelligent’—to thousands of contractors employed by Hitachi-owned GlobalLogic and other outsourcing companies. Most raters working at GlobalLogic are based in the US and deal with English-language content. Just as content moderators help purge and classify content on social media, these workers use their expertise, skill, and judgment to teach chatbots and other AI products—including Google’s search summaries feature called AI Overviews—the right responses on a wide range of subjects. Workers allege that the latest cuts come amid attempts to quash their protests over issues including pay and job insecurity. These workers, who often are hired because of their specialist knowledge, had to have either a master’s or a PhD to join the super rater program, and typically include writers, teachers, and people from creative fields.”
The article discusses the time discipline imposed on these AI raters: “Alex, along with several other workers, was pulled into a project a few months prior, which she initially thought would lead to promotions. But instead, it led to intensified workplace stress. Alex says that in this project, their task timers were set at five minutes, raising concerns amongst her and her coworkers that they are ‘sacrificing quality at this point.’ ‘I don't even keep count of how many I do in a day,’ says Alex. ‘I just focus more on the timer than anything else—it’s gone from mentally stimulating work to mind-numbing.’ She added that she often does not reach that metric of completing each task within five minutes and that the company has been ‘threatening many of us with losing our job or the project in general if we don't get these numbers down.’”
“Workers still at the company claim they are increasingly concerned that they are being set up to replace themselves,” the article continues. “According to internal documents viewed by WIRED, GlobalLogic seems to be using these human raters to train the Google AI system that could automatically rate the responses, with the aim of replacing them with AI…Those that remain working at GlobalLogic say they are afraid to speak up because they may lose their jobs. ‘It’s just been kind of [an] oppressive atmosphere,’ says Alex. ‘We can’t really organize—we're afraid that if we talk we’re going to get fired or laid off.’”
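For readers who want the alleged mechanism spelled out: training an automatic rater from accumulated human ratings is a standard supervised-learning pattern. The sketch below is a minimal, hypothetical illustration of that general technique, using invented example data and an off-the-shelf scikit-learn classifier; it describes nothing about Google’s or GlobalLogic’s actual systems, which are not public.

```python
# Hypothetical sketch: turning human quality ratings into an automatic rater.
# All data and design choices here are invented for illustration and say
# nothing about Google's or GlobalLogic's actual (non-public) systems.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Imagine each record pairs a chatbot response with a human rater's verdict
# (1 = acceptable, 0 = not acceptable).
human_rated = [
    ("The Eiffel Tower is in Paris, France.", 1),
    ("The Eiffel Tower is in Rome.", 0),
    ("Water boils at 100 degrees Celsius at sea level.", 1),
    ("Water boils at 50 degrees Celsius at sea level.", 0),
]
texts, labels = zip(*human_rated)

# Convert responses to features and fit a classifier on the human verdicts.
vectorizer = TfidfVectorizer()
features = vectorizer.fit_transform(texts)
auto_rater = LogisticRegression().fit(features, labels)

# The fitted model can now score new responses with no human in the loop.
new_response = vectorizer.transform(["The Eiffel Tower is in Berlin."])
print(auto_rater.predict_proba(new_response))  # [P(bad), P(good)]
```

Once enough human verdicts have been collected, a model along these lines can score new responses with no rater involved, which is exactly the substitution the workers quoted above fear.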
“Sam Altman on God, Elon Musk and the Mysterious Death of His Former Employee”
In this remarkable interview on Tucker Carlson’s podcast, OpenAI CEO Sam Altman responds to questions about his roles and responsibilities in relation to the globalization of ChatGPT. What emerges is a portrait of a man flying by the seat of his pants while simultaneously controlling and promoting the most transformative technology in the world today. The hour-long interview is worth listening to in full.
“How Chatbots Are Changing the Internet”
At the Financial Times, John Thornhill writes that “On the internet, it has become almost impossible to know whether you are interacting with a real human or a synthetic one powered by an AI-enabled chatbot, such as OpenAI’s ChatGPT. We increasingly live in an online world in which no one knows you’re a bot.”
“Bots are already helping millions of users around the world do myriad useful things: answering customer enquiries in real time; drafting marketing emails in foreign languages; enabling novice programmers to ‘vibe code’ websites; helping teachers write lesson plans and students research (and write) essays; creating lifelike avatars to read out cricket scores in India’s 22 official languages. We can all see and benefit from the upsides of this technology. But we also need to reckon with some of its more insidious downsides.”
As an example, “AI tools are giving criminals new superpowers and increasing our own vulnerabilities as we increasingly rely on billions of AI-powered connected devices. Some security experts warn we are rapidly entering an online world in which our default assumption must be reversed: we will have to view every digital interlocutor as a counterfeit human, unless proved otherwise.”
Andrew Bud, the founder of an online authentication company called iProov, believes that generative AI may be “the biggest setback to human understanding since the Enlightenment.” “The generative AI models that run the most commonly used chatbots are probabilistic machines that generate the most plausible, not the most accurate, answer,” Thornhill notes. “They operate by statistical correlation, rather than empirical observation. These black box systems also exhibit a well-known tendency to hallucinate—or confabulate—facts. ‘The Enlightenment was all about explainability and rationality. AI has an explainability gap that takes us back to the era of superstition,’ Bud says.”
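Bud’s distinction between the most plausible and the most accurate answer can be seen in miniature with a toy next-word predictor. The sketch below uses an invented three-sentence corpus in which a misconception about Australia’s capital appears more often than the correct fact, so a purely statistical model “answers” with the misconception. Real LLMs are vastly more sophisticated, but the plausibility-over-accuracy logic Bud describes is the same.

```python
# Toy next-word predictor illustrating plausibility vs. accuracy.
# The "corpus" is invented: a misconception (Sydney) appears twice,
# the correct fact (Canberra) only once.
from collections import Counter, defaultdict

corpus = (
    "many people think the capital of australia is sydney . "
    "tourists often say the capital of australia is sydney . "
    "in fact the capital of australia is canberra ."
).split()

# Count which word has historically followed each three-word context.
follows = defaultdict(Counter)
for i in range(len(corpus) - 3):
    context = tuple(corpus[i:i + 3])
    follows[context][corpus[i + 3]] += 1

# The "answer" is the statistically most plausible continuation.
prompt = ("of", "australia", "is")
print(follows[prompt].most_common())
# [('sydney', 2), ('canberra', 1)] -> most plausible, not most accurate
```

Nothing in the computation represents truth; “sydney” wins simply because it is the more frequent continuation in the training text.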
“Petrified Factuality”
Horning writes that “Recent research (reported on by The Wall Street Journal)…suggests that ‘lower artificial intelligence literacy predicts greater AI receptivity’—i.e. the more one understands about what AI is, the less likely one is to use it. But an array of forces are aligned against allowing it to be understood, much as capitalism has always been enshrouded in various mystifications and ideological misrecognitions. The Wall Street Journal article quotes a business professor who advocates ‘calibrated literacy’ toward ‘AI’: people should be taught just enough about it to find it magical and ‘delightful’ but not so much that they see what it actually is: algorithmic pattern matching. In other words, to get people to use ‘AI,’ they must be taught to love their own ignorance as a kind of enabling magic. (Isn’t it better and more ‘delightful’ to believe that the sun is carried across the sky by gods driving a celestial chariot than to develop the science of astronomy?).”

Horning argues that generative AI “show[s] how data are ‘conjoined’—how words and concepts have historically fit together—while leaving them unchanged and unexplained, attempting to convince users that they are all ultimately unchangeable. But as we bear the costs of AI and the damage it wreaks—as we resist rather than resign ourselves to the deskilling and desocializing it works to impose—our consciousness of what must be done, of what forms our resistance can practically and efficaciously take, comes into sharper focus. Thinking rather than prompting; collaborating with other people and socializing rather than withdrawing into nonreciprocal machine chat—these become clarified as sources of strength and means of de-reification. Of course, this means they will continue to be under constant ideological attack. Capitalism has to produce ignorance and apathy to perpetuate itself; ‘AI’ is merely the latest means of production.”
“Virtual Intelligence”
Writing for his eponymous Substack, Eisenstein contends that “Through AV recording technology, and even more through generative AI, we learn a habit of distancing ourselves from what we see and hear. These are among the senses that establish our presence in the world. No wonder so many feel so lost here.”

“The person who lives in an environment of ubiquitous deceit learns not to trust anything,” Eisenstein continues. “This has dire political and psychological consequences. A serious political consequence is that we no longer trust photographic or video evidence of crimes against humanity. That distrust endows the crimes with a shield that allows them to proceed in full view of the public. Automatically, we discount whatever we see on screen, knowing on some level that it isn’t real—in the sense that there is no kitten cavorting right there; that whatever we are seeing isn’t happening right now. (Or, in the case of computer-generated images, happening at all.) We have, in other words, grown inured to whatever the screen is telling us.”
He elaborates: “That habit originates quite sensibly, since most of the violence and drama we witness on screens is indeed unreal. If we took all those TV gun battles and car chases as real, they would fry our nerves. So we discount them—discounting along with them images and stories that are real. The eye and ear cannot easily distinguish which is which. They all present the same. That habit of discounting digitally transmitted information makes the public relatively unresponsive to horrifying events. It has been habituated to assume, unconsciously, that this isn’t really happening. Immersion in a world of virtual sounds and images induces feelings of alienation and loneliness. When we see and hear things that are not there, a dreadful ‘de-realization’ ensues, in which one wonders, ‘Maybe I am not really here either.’ It isn’t usually an explicit thought, it is a feeling, a sense of phoniness and meaninglessness, of living in a simulation. Naturally, we stop giving a shit about what happens to something that isn’t real anyway.”
Eisenstein proceeds to reflect on the “post-modern” character of AI: “Post-modernism, especially in its post-structuralist variants, holds that meaning is not anchored in any stable reality but arises through differential relations among signs. In this framework the signifier takes precedence over the signified: language does not transparently point to an underlying world but endlessly refers to itself. That is very much how an LLM learns language—it derives meaning not from an experience of an underlying reality but by studying ‘differential relations among signs.’ It does not anchor language in any direct experience of an underlying world, but uses language based solely on how language is used. If one accepts the basic premises of post-modernism, then there is ultimately little difference ‘under the hood’ between human and machine language use. In that case, virtual intelligence = real intelligence. There is something very post-modern about the AI takeover. Post-modernism’s detachment of meaning from a material substrate is conceptual; artificial intelligence makes it real. It ushers us into a world where indeed, language endlessly refers to itself.”
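Eisenstein’s point about “differential relations among signs” corresponds to a well-established idea in computational linguistics known as distributional semantics: a word is characterized by the contexts it occurs in. The toy sketch below (an invented corpus, simplified far beyond real LLM training) shows how a kind of “meaning” can be computed from co-occurrence alone, with no reference to anything outside the text.

```python
# Toy distributional semantics: a word's "meaning" is nothing but the
# company it keeps. The corpus is invented; no symbol here is grounded
# in anything outside the text itself.
import math
from collections import Counter, defaultdict

corpus = (
    "the cat chased the mouse . the dog chased the cat . "
    "the dog ate food . the cat ate food . the king ruled the land ."
).split()

# Build co-occurrence vectors: count neighbors within two positions.
vectors = defaultdict(Counter)
for i, word in enumerate(corpus):
    for j in range(max(0, i - 2), min(len(corpus), i + 3)):
        if j != i:
            vectors[word][corpus[j]] += 1

def cosine(a, b):
    """Similarity of two words based purely on shared contexts."""
    dot = sum(vectors[a][w] * vectors[b][w] for w in vectors[a])
    norm_a = math.sqrt(sum(v * v for v in vectors[a].values()))
    norm_b = math.sqrt(sum(v * v for v in vectors[b].values()))
    return dot / (norm_a * norm_b)

# "cat" scores higher against "dog" than against "king" because their
# neighboring words overlap more; nothing here ever touches an actual cat.
print(cosine("cat", "dog"), cosine("cat", "king"))
```

On this toy corpus, “cat” comes out closer to “dog” than to “king” purely because their neighboring words overlap more, which is the sense in which meaning can arise from relations among signs alone.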
What grounded your thinking this week? Share in the comments.