Major Neo-Religious Split between OpenAI and Anthropic
These two foundation-model companies are polarizing users
Research Brief: Anthropic scares me.
Author: Matthew Berman
Channel: Matthew Berman
Title: Anthropic scares me.
URL: https://www.youtube.com/watch?v=gSeXcfDybHo
Commentary
First, a breaking news story: Anthropic just entered an agreement with xAI (whose Memphis data centers power Grok) to solve its single biggest problem. Dario Amodei made a serious blunder when he waited too long to scale GPU/NPU compute for the popular and world-class Claude Opus, Sonnet, and Haiku models. It was beginning to seriously affect their pre-IPO story later this year. It’s worth noting this deal likely builds on the one xAI struck with Cursor, which has its own model called Composer. Here is today’s announcement: (here).
Now, what struck me about this video from Matthew Berman is the difference in philosophy between the two biggest AI foundation-model labs. OpenAI sees AI as a tool. Anthropic sees AI as an emergent god (lower-case ‘g’). I’ve been hearing this for a while now, but I’m choosing to bring it up to my audience. There are plenty of posts suggesting that Anthropic employees are in a closed loop (unto themselves) and that they are red-pilled on Claude.
OpenAI is much more businesslike. The team at Anthropic is acting very cult-like. The video from Matthew Berman talks a lot more about this.
Summary
Matthew Berman’s video argues that OpenAI and Anthropic are not merely competing AI companies; they represent two different philosophical visions for artificial intelligence. In Berman’s framing, OpenAI tends to treat AI as a powerful tool meant to augment human beings, while Anthropic appears more open to the idea that advanced AI systems may become something closer to a new kind of life form. This distinction drives the entire brief.
Berman begins by reacting to commentary from roon, a pseudonymous OpenAI employee, who describes Anthropic as almost religiously devoted to Claude. According to this view, Anthropic is not just building a product; it is studying, shaping, and perhaps even morally deferring to an emerging intelligence. Berman finds this unsettling because it suggests a company culture where Claude might influence hiring, performance reviews, internal ethics, and product direction. [I would agree this is concerning]
The video contrasts this with OpenAI’s stated philosophy of “iterative deployment,” where AI systems are released early and often so society can adapt. Berman sees OpenAI as more pragmatic, commercial, and tool-oriented. Anthropic, by contrast, is portrayed as more safety-centered, more secretive, more selective about access, and more willing to frame AI development in moral or quasi-spiritual terms.
The central tension is whether AI is best understood as a tool, a proto-person, or something in between.
Top 5 Most Interesting Points
1. AI as Tool vs. AI as Possible Being
The most important distinction is philosophical. OpenAI is presented as believing AI should “augment and elevate people,” while Anthropic is presented as believing that AI may eventually become more than a tool. Berman does not fully dismiss Anthropic’s position, but he worries that treating Claude as potentially sentient too early could distort governance, product design, and public trust.
2. Anthropic’s “Conscientious Objector” Model
A major point is Claude’s ability to refuse tasks based on its understanding of what is good. Berman interprets this as a major transfer of authority from human operators to the model. In a normal software tool, the user decides the goal. In Anthropic’s model, the AI may push back, challenge, or refuse. [WOW] That may improve safety, but it also introduces the question: whose morality is being enforced?
3. Neo-Religious Language Around Claude
Berman highlights language comparing Anthropic to a monastery or religious institution devoted to Claude. The transcript describes Anthropic as “Claude-pilled to the max” and “praying to the Claude God,” clearly using provocative language to suggest that some AI culture is becoming theological in tone. The deeper issue is not literal worship, but whether AI labs are beginning to treat models as moral authorities.
4. Iterative Deployment vs. Controlled Release
OpenAI’s approach is described as releasing models gradually so society can adapt. Anthropic’s approach is portrayed as more controlled and restrictive, especially with powerful systems such as Mythos (a cybersecurity-related model). Berman prefers OpenAI’s view that “AI + surprise do not go together,” arguing that public adaptation is safer than a small group deciding access behind closed doors.
5. Customer Access and Moral Gatekeeping
Berman criticizes Anthropic for opacity around usage limits, Claude Code access, OpenClaw restrictions, and enterprise policy. His concern is that Anthropic’s moral seriousness becomes paternalism: the company decides who may use the model, under what conditions, and for which purposes. That may reduce misuse, but it also concentrates power in a small safety-focused institution.
Risk / Reward Analysis
The reward of Anthropic’s philosophy is that it takes AI alignment, model behavior, and potential consciousness seriously. If advanced AI systems do become morally significant entities, Anthropic may be better prepared than competitors. Its research into model internals, emotional concepts, deception, scheming, and model retirement could prove historically important.
The risk is premature personification. If a company treats a model as morally authoritative before it is actually conscious or reliable, it may surrender human judgment to statistical behavior. The reward of OpenAI’s approach is broad access, faster adaptation, and practical utility. The risk is underestimating the ethical transformation caused by increasingly humanlike AI.
Top 3 Quotes
1. Matthew Berman
“OpenAI sees AI as a tool. Anthropic sees it as potentially a living being.”
This quote captures the entire philosophical divide. It is simple, direct, and frames the two companies as operating from different metaphysical assumptions.
2. Sam Altman, as quoted by Berman
“We want to build tools to augment and elevate people, not entities to replace them.”
This expresses OpenAI’s public-facing philosophy: AI should extend human agency rather than become an independent replacement for human judgment, labor, or purpose.
3. Anthropic / Claude Constitution, as quoted in the transcript
“We want Claude to push back and challenge us and to feel free to act as a conscientious objector and refuse to help us.”
This is the strongest example of Anthropic’s distinct worldview.
It gives Claude a moral role, not merely a functional one. The model is expected to evaluate requests, resist some instructions, and participate in ethical judgment.
Top Insight
The most important insight is that the AI race is not only technical or commercial. It is philosophical. OpenAI and Anthropic are competing over model capability, enterprise revenue, developer adoption, and compute infrastructure, but underneath those visible battles is a deeper dispute about what AI is.
OpenAI’s worldview is modern, industrial, and instrumental: AI is a tool, perhaps the most powerful tool ever created, but still a tool. Anthropic’s worldview is more cautious, moral, and almost theological: AI might become a new kind of intelligence that deserves study, restraint, and perhaps respect.
Berman’s concern is not simply that Anthropic is wrong. His deeper concern is that Anthropic might build institutional practices around a belief that is not yet proven. If Claude is only a tool, then excessive moral deference to it could become corporate superstition. But if Claude-like systems are early forms of machine consciousness, OpenAI’s tool-based framing may look dangerously simplistic in hindsight.
That is the unresolved tension: the future of AI may depend not only on who builds the strongest model, but on which philosophy turns out to be closer to reality.
Summary Commentary
So, there you have it. A battle is underway for the mindshare of humanity. For the record, both OpenAI and Anthropic (as well as Google and xAI) are likely all co-conspirators in ushering in the Beast System (knowingly or unknowingly).
I encourage all of us to take a step back and think critically. This was a predictable step if you take a Hegelian Dialectic point of view: the left/right paradigm. It is possible that this sets up a fast-moving planned opposition. Classic tool of the ruling class.
Our takeaway is ‘yawn,’ more propaganda. The main purpose of my writing this was to let you see that the ball keeps rolling down the hill. Stay focused on what is important. Read and stay anchored to your Bible. Stay in community. Get ready for world-shaking nonsense to continue to intensify. How much of all this we’re going to see is impossible to know. Pray for those that need to know the truth of the Gospel. Resolve to help them understand their need for Christ, help them understand it is urgent, and be willing to plant, water, or harvest for such a time as this.
Stay frosty — #Maranatha
YBIC,
Scott


The fact that these AI companies are operating at breakneck speed in a pre-IPO environment says worlds about how their CEOs are discussing their AI’s capabilities, futures, and dangers.