
The ‘Techlash’ Against AI Is Here. Have We Hit a Tipping Point?


By Lorena O’Neil


April 16, 2026

OpenAI CEO Sam Altman speaks during the BlackRock Infrastructure Summit on March 11, 2026 in Washington, DC. Anna Moneymaker/Getty Images

Last Monday, 20-year-old Daniel Moreno-Gama was charged with attempted murder and arson after allegedly throwing a Molotov cocktail at the San Francisco home OpenAI CEO Sam Altman shares with his husband and one-year-old. Authorities say that after launching the explosive device, the Texas man traveled to OpenAI’s offices and threw a chair at the building’s glass doors, threatening to burn the building down and kill anyone inside. He was arrested while holding a jug of kerosene. According to court documents, he’d written that AI posed an existential risk to humanity, warning of its “impending extinction,” and authorities say they found a document on him listing other AI companies as targets. (His attorney has said he was experiencing a mental health crisis.)

Two days later, two suspects were arrested after allegedly firing a gun near the CEO’s property. Earlier this month, someone fired 13 shots at the front door of an Indiana councilman’s home and left behind a note on his doorstep that read “No Data Centers.”

Outspoken critics of Altman have condemned the violence and expressed their sympathy. Alex Bores, an ex-Palantir employee turned pro-AI regulation congressional candidate, called the Molotov attack “unwarranted and unacceptable.” “Sam and I may disagree on many things,” he said, “but we are all human and we cannot allow ourselves to lose the humanity at the heart of the debate over the future of AI safety.” 

But on social media, some not only eschewed empathy, but celebrated the attacks. Commenters asked how they could support Moreno-Gama’s bail fund, while others joked they hoped the Molotov cocktail was okay. “I care about Sam Altman’s humanity as much as he cares about mine,” wrote one X user. “Trying to stop the AI apocalypse is a heroic action, not a criminal one,” wrote another. “The criminals are the AI CEOs who want to kill humanity & replace us with robots.” 

The celebratory reaction to the rash of violent acts reflects the public’s growing anger and resentment against AI companies, data centers, and tech billionaires. AI experts who have been sounding the alarm within the tech community about the harms of building AI without guardrails see the public’s reaction as an escalation of the mistrust toward AI that has been simmering for years. Stanford’s 2026 AI Index Report found that 64 percent of U.S. adults think AI will lead to fewer jobs. Fifty-two percent say that products and services using AI make them nervous, and 79 percent say these products and services should disclose AI use. 


Digital media professor Safiya Noble, author of the book Algorithms of Oppression, says, “We are solidly in the ‘techlash,’ which is the backlash against the tech sector and tech billionaires, who are obsessed and preoccupied with their sci-fi fantasies of the future.” 

THE PUBLIC’S NEGATIVE PERCEPTION of artificial intelligence has been building for years, says Alondra Nelson, who previously led the Biden administration’s Office of Science and Technology Policy.

“The negative sentiment around AI has been growing steadily, and what’s changed is that the public has developed both the vocabulary and the lived experience to name what’s bothering them,” Nelson says. “Energy costs, job displacement, discrimination, the concentration of power in a handful of companies, harm to our young people and a profound sense of a lack of agency and empowerment in the face of all of this.”

As AI companies have grown, so have their plans to build massive data centers across the country. The facilities, which consume enormous amounts of water and electricity and have displaced residents, particularly across the South, where construction has been concentrated, have become increasingly unpopular. Maine recently passed the first statewide ban on them, and Bernie Sanders and Alexandria Ocasio-Cortez have introduced legislation to halt construction until more regulation is in place.


“Data centers are the physical manifestation of AI infrastructure, and they’ve become a flashpoint precisely because they’re tractable,” Nelson says. “They exist in specific places, they consume specific resources, they can be seen and pointed to.”

While data centers represent tangible ways in which AI is changing our lives, the more abstract and ambiguous fears are coming to a boil, as well.

“There is a growing concern, broadly,” says Suresh Venkatasubramanian, director of the Center for Tech Responsibility at Brown University. “Going back to the horrible events last summer with teenagers getting sucked into AI-fueled psychosis and committing suicide. That, together with a lot of the rhetoric around the cost savings that will come with replacing people by machines — that is being, frankly, pushed by all the tech companies — is creating a lot of fear in every sector of society.”

Noble says industry leaders’ declarations that AI will eliminate labor have sparked a significant backlash.

“They’ve stolen all the works of humanity, the books, the art, everything we’ve ever put on Reddit, they’ve turned around and tried to monetize and sell it back to us and defund education, libraries, public health institutions,” Noble says. “People are not stupid, and there are many fronts where people have been talking about what faulty tech products do to destabilize communities and institutions, and I think we will continue to see more backlash grow.”

Venkatasubramanian points out that the reaction to the attacks on Altman’s house was eerily reminiscent of what happened following the murder of United HealthCare CEO Brian Thompson in Manhattan. Over the weekend, commenters on social media joked about volunteering mock alibis for Moreno-Gama, like they did when prime suspect Luigi Mangione was arrested in December 2024. And the Wall Street Journal reports Moreno-Gama mentioned Mangione and the United HealthCare CEO shooting in an online chat months before the Altman attack. OpenAI’s offices reportedly have a note reminding employees to hide their badges as they leave, similar to United HealthCare’s policies.

THERE’S A LONG HISTORY OF THE GENERAL public responding strongly to advances in technology, explains economist Carl Benedikt Frey, author of The Technology Trap, which focuses on the history of technological progress from the Industrial Revolution to the advent of AI.

“If a technology threatens people’s jobs and skills, which is essentially what most people derive their income from, they’re quite likely to resist it, and rightly so,” Frey says. “The Luddites” — 19th century British textile workers who destroyed automated looms — “are often portrayed as these irrational enemies of progress, but they were not the ones who stood to benefit from mechanized factories, and so their opposition made sense.” 

Frey explains that economists often say technology can make people better off in the long run, by making goods more readily available and cheaper, but people “live in the present” and if they see a threat to their jobs, it’s natural to be resistant and skeptical.

“A difference this time, relative to previous episodes of technological change, is that even the makers of the technology are actively warning about this risk,” Frey says. He points to Dario Amodei, the CEO of Anthropic, warning that AI could displace half of all entry-level white-collar jobs and “worst case destroy all life on Earth.” 

Frey adds that society’s resistance to automation tends to coincide with economic downturns, like during the Great Depression, or recessions in the 1960s. The war in Iran, higher interest rates, and an unstable job market could amp up anxiety. Frey adds that communities also get frustrated if they don’t see their views reflected in policies. 

“If people feel that, showing up at the ballot box, they’re not getting their voice heard, they may use other means to try to get their voice heard, not that I’m condoning that sort of violence,” Frey says. “One shouldn’t be surprised that if people feel that they are not likely to benefit from a technology, they are going to resist it. And if they feel that the political system is not delivering or responding to their concerns, then you’re more likely to see activism, which should preferably be nonviolent.”

That’s why AI safety experts like Venkatasubramanian say it’s important for tech companies to build back trust.

“Everyone, collectively, is feeling this sense of the world is shifting around us,” Venkatasubramanian says. “We don’t know how it’s going to play out, but the people we look to, whether it’s [national] politicians or tech leaders seem to have no answers or don’t care.”

AI ethicists for years have been saying that building guardrails and protections around AI tools is like adding seatbelts to cars and lanes to highways, he points out. The regulation allows people to go fast in a safer way. 

“You don’t get to a place of trust by just convincing people to trust companies and others, you get to it by acting,” Venkatasubramanian says. “And those actions at the national level have been few and far between. The states have tried very hard to legislate, but they’re also being hampered, ironically, by the very tech companies who parachute into states and block them from doing anything to build more trust and to put up guardrails on AI.”


Originally reported by Rolling Stone