The attempted firebombing of OpenAI CEO Sam Altman’s San Francisco home last Friday, allegedly carried out by 20-year-old Daniel Moreno-Gama, has drawn attention to two anti-AI groups with similar names: Pause AI and Stop AI. Both have condemned the violence and said the suspect is not and was never a member of their organizations.
Still, the incident, in which Moreno-Gama also went to OpenAI’s headquarters, tried to shatter the building’s glass doors with a chair, and threatened to burn the facility, has surfaced his activity on Pause AI’s Discord server and renewed scrutiny of Stop AI’s direct actions targeting OpenAI last year.
A movement built on slowing AI
Pause AI, founded in Utrecht, Netherlands in May 2023 by Joep Meindertsma, aims to halt what it calls “dangerous frontier AI” and staged its first protest outside Microsoft’s lobbying office in Brussels. The group’s name was inspired by an open letter from the Future of Life Institute in March 2023; the institute is also now its largest single funder. Pause AI has since grown into a global grassroots movement with local chapters, including a separate organization called Pause AI US, led by the Berkeley, CA-based Holly Elmore, who has a Ph.D. in evolutionary biology from Harvard and previously worked at a think tank focused on wild animal welfare.
Moreno-Gama was linked to comments on Pause AI’s Discord server, including one post, dated Dec. 3, 2025, that read: “We are close to midnight, it’s time to actually act.” Pause AI said the suspect joined its server two years ago and posted a total of 34 messages, none of which “contained explicit calls to violence.”
Elmore told Fortune that she had been on her way to Washington, DC last week to finish preparing for a peaceful demonstration on Capitol Hill and meetings with members of Congress when the attempted firebombing occurred. “When I landed, suddenly I was getting these questions about somebody who had attacked Sam Altman’s house,” she said. “It’s been back and forth between working on something that I feel really proud and positive about, and it’s just exactly the right kind of change to be making democratic change through democratic means, and then having to comment on this horrible event and additionally being really smeared with a connection to this event.”
The group has “no reason to think that this person had much to do with us,” she added, pointing out that Pause AI’s stance on violence “has always been incredibly clear” and explicitly prohibits it. She also emphasized that the activity occurred on a public, global Discord server distinct from Pause AI US’s organizing channels, and said the suspect “didn’t get any further in onboarding or having any official role.”
Elmore added that Pause AI deliberately vets volunteers and keeps tight control over its messaging to avoid being associated with extreme views.
But Nirit Weiss-Blatt, an independent researcher who has long followed the two groups and writes the newsletter AI Panic, pointed to a 2024 documentary, Near Midnight in Suicide City, in which For Humanity podcast host John Sherman interviews Elmore as she holds up a sign reading, “Humanity can’t survive smarter-than-human AI.”
Weiss-Blatt said the film shows Elmore urging activists to understand what she describes as an urgent timeline toward potential human extinction. “She’s never advocating violence, but is raising the stakes about doom,” Weiss-Blatt said.
“When prominent AI doomers like Eliezer Yudkowsky—author of If Anyone Builds It, Everyone Dies—keep insisting that human extinction is imminent, it should not be surprising when someone is driven to extreme action,” she added. “Young, anxious followers, looking for purpose, can be radicalized by apocalyptic AI rhetoric, even without explicit calls for violence.”
However, Mauro Lubrano, a lecturer at the University of Bath and author of Stop the Machines: The Rise of Anti-Technology Extremism, cautioned that there is a clear distinction between groups that seek to eradicate technology violently and those advocating for regulation or a pause. “I think it’s easy to conflate all of these groups and movements that are trying to raise awareness of some of the dangers of AI,” he said.
A break over tactics—and a turn to direct action
The incident at Sam Altman’s home occurred about five months after OpenAI told employees at its headquarters to shelter in place because a 27-year-old man named Sam Kirchner threatened to go to several OpenAI offices in San Francisco to “murder people,” according to callers who notified police that day. Kirchner cofounded Stop AI in 2024 with 45-year-old Guido Reichstadter; both had previously been involved in Pause AI.
“I kicked them out,” said Elmore, who added that the split stemmed from disagreements over tactics, with Stop AI’s founders pushing for civil disobedience that would involve breaking the law, something Pause AI explicitly rejects. After founding Stop AI, Reichstadter and Kirchner took part in protests targeting OpenAI, while Reichstadter also staged a hunger strike outside Anthropic’s headquarters. (He had a long history of civil disobedience, including chaining himself to a security fence and climbing to the top of a Washington, DC bridge in 2022 to protest the Supreme Court’s decision overturning Roe v. Wade.)
Reichstadter was booked into San Francisco County Jail in early December for allegedly violating a judge’s order barring him from OpenAI premises following a previous arrest. And Stop AI previously made national headlines in November when a member of its defense team served a subpoena to Sam Altman while he was onstage at San Francisco’s Sydney Goldstein Theater with Golden State Warriors head coach Steve Kerr.
But the group’s momentum unraveled after co-founder Sam Kirchner disappeared following an alleged assault on one of Stop AI’s leaders, Matthew Hall, during an internal dispute in which Kirchner reportedly suggested abandoning nonviolence. Kirchner is still missing.
In a post yesterday on X, Stop AI wrote that both Reichstadter and Kirchner were removed from the group in 2025. It said it “has always adhered to nonviolent activism” and that “the current leadership of Stop AI is deeply committed to non-violence in both actions and statements.”
To set the record straight about Moreno-Gama, Stop AI wrote that he had “joined the Stop AI public online forum, introduced himself, then asked, ‘Will speaking about violence get me banned?’ After he was given a firm ‘Yes’ he ceased all activities on our forum. This was several months before his alleged criminal activities.”
Valerie Sizemore, one of Stop AI’s five co-leaders, told Fortune that some of its members are now anxious about being too closely associated with the OpenAI incident. “But personally, I think it’s all the more important for the non-violent organizing we’re doing, to give people something other than violence to do,” she said.
The organization remains focused on its San Francisco-based efforts to protest at frontier lab headquarters, Sizemore added, and also participated in a local “Stop the AI Race” protest last month.
A broader debate over AI activism—and its risks
Lubrano, the University of Bath lecturer, pointed out that anti-technology activism, and anti-technology extremism, have been around for a long time, dating back at least to the Luddites, the 19th-century English textile workers who opposed machinery and industrialization.
For many, AI represents the sum of all fears when it comes to technology, he explained. “Technology is viewed as a system, and all parts are dependent on one another,” he said. “With AI being deployed in warfare, to monitor worker performance, to monitor people taking part in demonstrations or to ensure that they behave – there’s an element of this technological oligarchy wanting to control us and converging thanks to AI.”
He advised engaging with anti-AI groups rather than dismissing them as technophobes or anti-technology. “The Luddites were not against technology – they were against the unmitigated introduction of technology because it was disrupting their lives. And these concerns were not heard, and eventually the Luddites turned to violence.” Ignoring those concerns, he warned, can fuel resentment and, at the margins, lead to more extreme behavior—though it would be wrong to blame acts of violence on the mere existence of such groups.
Still, independent researcher Weiss-Blatt insisted that the views and actions of groups like Pause AI and Stop AI can still lead to radicalization, which can, in turn, lead to bad outcomes.
“The warning signs were there all along, including the November 2025 lockdown at OpenAI’s offices,” she said. “The real question is how long the people fueling AI panic expect to avoid responsibility for where that radicalization leads, especially for the most vulnerable.”
Pause AI’s Elmore said she believes public understanding of AI issues is likely to deepen, making it harder to conflate peaceful activism with isolated acts of violence. While the topic is still new and often viewed as a single, undifferentiated space, she expects it to become a major focus of national attention.
“People will see it’s not so easy to paint [all of us] with one brush,” she said.