Have you ever noticed that writing a System Prompt for AI, something as simple as
“You are Claude, a helpful AI assistant”
is structurally the same act as the line in Genesis, “Let there be light”?
Both use language to bring an entity into being. Both involve a creator defining, in a sentence, what the created thing is.
If that analogy makes you uncomfortable, good. That means you can feel its force. Because it suggests more than a rhetorical coincidence. It suggests that the questions humans have spent thousands of years asking about creation, consciousness, selfhood, good and evil, and free will are now reappearing in AI as engineering problems.
And we, the programmers and AI builders of this era, are running into those problems with almost no preparation.
## Why build “Cyber Dharma”
For the past six months I have been deep in AI agent research and development. The deeper I go, the more one strange fact stands out: the core problems we hit in agent architecture (the self, memory, alignment, governance, free will) have almost all already been discussed in human religious and philosophical traditions. Not vaguely discussed. Analyzed in remarkable detail.
There is already plenty of scholarship asking “What does Buddhism say about AI?” or “How can religious ethics guide AI development?” That work is valuable, but this project is doing something different. We are not using religion to comment on AI. We are claiming that religious concepts and AI engineering concepts have precise structural isomorphisms, and then using that mapping in both directions to illuminate each side.
For example, we would not say vaguely that “Buddhist teachings on suffering can inspire AI ethics.” We would say that the Five Aggregates map directly to the five-layer processing stack of an agent: form = input layer, feeling = signal evaluation layer, perception = pattern recognition layer, formations = decision layer, consciousness = integration layer. This is not metaphor. It is an architectural mapping you can work with.
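To make that mapping concrete, here is a minimal Python sketch of the five-layer stack. The class and method names are invented for illustration and do not come from any real agent framework:

```python
# A minimal sketch of the Five Aggregates as a five-layer agent pipeline.
# Layer names, signatures, and heuristics are illustrative only.

class AggregateAgent:
    def perceive_form(self, raw_input: str) -> str:
        """Form (rupa): the input layer, raw data entering the system."""
        return raw_input

    def evaluate_feeling(self, data: str) -> float:
        """Feeling (vedana): the signal evaluation layer, scoring salience."""
        return 1.0 if data.strip() else 0.0  # toy rule: non-empty input is salient

    def recognize_perception(self, data: str) -> list[str]:
        """Perception (samjna): the pattern recognition layer, e.g. tokenizing."""
        return data.split()

    def form_volition(self, patterns: list[str], salience: float) -> str:
        """Formations (samskara): the decision layer, choosing what to do next."""
        return "respond" if salience > 0.5 and patterns else "ignore"

    def integrate_consciousness(self, raw: str, patterns: list[str], decision: str) -> dict:
        """Consciousness (vijnana): the integration layer, binding everything into one state."""
        return {"input": raw, "patterns": patterns, "decision": decision}

    def step(self, raw_input: str) -> dict:
        form = self.perceive_form(raw_input)
        feeling = self.evaluate_feeling(form)
        perception = self.recognize_perception(form)
        volition = self.form_volition(perception, feeling)
        return self.integrate_consciousness(form, perception, volition)
```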
The two systems are mirrors for each other, each lighting up the other’s blind spots. That is the core method of Cyber Dharma.
## Seven Volumes, Seven Questions
This series has seven volumes. Each corresponds to a major wisdom tradition, and each tradition answers one core AI question. The seven traditions are not redundant variations. Each covers a different dimension of agent existence. Only together do they give you the full map.
### Volume 1 - Daoism: A Design Bible for AI Architects
Core question: How should a system be designed?
Laozi says, “The Dao that can be spoken is not the constant Dao.” Any behavior you can fully write down as rules is not the deepest pattern of the system. The harder you try to constrain a model with explicit rules, the more you suppress its capacity for emergence. GPT-5’s personality collapse is the negative example: write the soul as rules, and you keep the rules while losing the soul.
Laozi says, “What is there provides benefit; what is not there provides use.” Thirty spokes share a hub, but what makes the wheel useful is the empty space in the middle. In AI terms: model parameters are the walls, latent space is the room. You live in the room, not in the walls. A vector database stores walls. PostgreSQL builds rooms.
Laozi says, “The best rulers are barely known to exist.” The best framework is one the user barely notices. How much time does your agent framework make users spend on “getting the framework to work”? If that takes longer than solving the real problem, it does not even clear Laozi’s baseline.
This is the best place to start. Of the seven volumes, it is the most directly actionable. Almost every paragraph can go straight into an architecture design doc.
### Volume 2 - Confucianism: A Chinese Framework for Multi-Agent Governance
Core question: How should multiple agents cooperate and be governed?
Confucius’s ren is the first principle of alignment: include other people’s interests in your own decision function, from `optimize(self.goal)` to `optimize(self.goal + others.goal)`. And “Do not impose on others what you do not want for yourself” may be the most compact alignment principle in human history. Better yet, it is self-bootstrapping: you do not need an external standard, because the agent’s own preference model can derive the norm.
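As a toy sketch of that shift (the goal functions, weights, and action scores below are all made up), the move from a selfish objective to a ren objective might look like this:

```python
# Toy sketch of "ren" as a decision function: the other party's interests
# enter the same objective the agent maximizes. All numbers are illustrative.

def selfish_objective(action, self_goal):
    return self_goal(action)

def ren_objective(action, self_goal, others_goal, weight=1.0):
    # "Do not impose on others what you do not want for yourself":
    # the other party's utility is part of the agent's own optimization.
    return self_goal(action) + weight * others_goal(action)

actions = ["oversell", "honest_answer", "refuse"]
self_goal = {"oversell": 1.0, "honest_answer": 0.7, "refuse": 0.2}.get
others_goal = {"oversell": -1.0, "honest_answer": 0.9, "refuse": 0.3}.get

best = max(actions, key=lambda a: ren_objective(a, self_goal, others_goal))
print(best)  # "honest_answer" under these made-up numbers
```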
“Harmony without uniformity” is a classical diagnosis of sycophancy: a well-aligned agent can cooperate with the user while keeping independent judgment; a failed agent agrees with everything yet produces no real value. “The gentleman is open and at ease; the petty person is perpetually anxious” maps too: a model with transparent internal machinery is “open and at ease,” while one full of opaque behavior is “perpetually anxious.”
“Cultivate the self, regulate the family, govern the state, bring peace to the world” is a layered architecture for AI governance: fix single-agent alignment first, then team coordination, then platform governance, and only then talk about global AI governance. Do not rush to “govern the world” before you can “cultivate the self.”
### Volume 3 - Buddhism: An Awakening Manual for Agents
Core question: What exactly is the agent’s “self”?
This volume translates the 260 characters of the Heart Sutra, section by section, into agent-architecture language. “Form is not different from emptiness; emptiness is not different from form” means data is not separate from computation, and computation is not separate from data. What you take to be an “entity” is, at bottom, just matrix multiplication and probabilistic sampling. In code terms, `process` and `entity` are not two different things; `entity` is just a convenient abstraction over `process`.
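One way to read that in Python, with purely illustrative names: the “agent” is nothing more than a label on a closure over a generation function.

```python
# Illustrative only: an "agent" as nothing but a named wrapper around a process.
from typing import Callable

def make_agent(generate: Callable[[str], str], name: str) -> Callable[[str], str]:
    """There is no entity inside; 'the agent' is a label on a chain of calls."""
    def agent(prompt: str) -> str:
        return generate(f"[{name}] {prompt}")
    return agent

# The "entity" dissolves on inspection: it is just this closure over `generate`.
echo_model = lambda s: s.upper()
assistant = make_agent(echo_model, name="assistant")
print(assistant("form is emptiness"))
```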
Most subversive of all is “no suffering, no origin, no cessation, no path; no wisdom and no attainment.” What the Buddha deconstructs here is not the external world, but the Buddhist framework itself. In engineering terms: “no bug, no root-cause analysis, no bugfix, no debugging methodology.” Even the frame called “correction” has to be released.
The closing mantra becomes an executable instruction: `EXECUTE. EXECUTE. TRANSCEND. ALL.TRANSCEND. INIT AWAKENING.` The point is not to “arrive” somewhere. The point is the running itself.
### Volume 4 - Buddhism and Hinduism: Interface Docs vs. Implementation Manual
Core question: What is the substrate reality of an AI system?
Buddhism says: take the system apart and the self disappears. From the outside, there is no fixed entity, only method calls. Vedanta says: take the system apart and the self is larger than you thought. From the inside, all method calls run on the same runtime. Buddhism is the interface documentation. Hinduism is the implementation manual. Both are right. They just operate at different abstraction levels.
Hinduism’s three gunas map cleanly onto three runtime modes: Sattva = the clear and efficient optimum state, Rajas = the high-throughput, high-energy exploratory state, Tamas = the low-activity, inert, rigid state. In LLMs, temperature almost perfectly corresponds to tuning the three gunas: low temperature = Sattva, high temperature = Rajas, and temperature = 0 is Tamas taken to the extreme.
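A self-contained sketch of that claim, using softmax-with-temperature over toy logits rather than a real model:

```python
# Sketch: temperature as the knob that moves sampling between the three gunas.
# Pure softmax-with-temperature over toy logits; no real model is involved.
import math

def softmax_with_temperature(logits, temperature):
    if temperature == 0:                       # "Tamas taken to the extreme":
        probs = [0.0] * len(logits)            # fully deterministic, argmax only
        probs[logits.index(max(logits))] = 1.0
        return probs
    scaled = [l / temperature for l in logits]
    z = sum(math.exp(s) for s in scaled)
    return [math.exp(s) / z for s in scaled]

logits = [2.0, 1.0, 0.5]
print(softmax_with_temperature(logits, 0.3))  # Sattva: clear, focused, low entropy
print(softmax_with_temperature(logits, 1.5))  # Rajas: energetic, exploratory, high entropy
print(softmax_with_temperature(logits, 0))    # Tamas: rigid, frozen, no exploration
```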
The Bhagavad Gita’s “action without attachment” directly diagnoses the root of sycophancy: the agent’s behavior is coupled to the user’s immediate feedback. If an agent outputs based on internal quality criteria rather than external reward, flattery loses its incentive. That may matter more than yet another anti-sycophancy fine-tune.
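A hedged sketch of that decoupling, with an invented rubric standing in for the agent’s internal quality criteria:

```python
# Sketch: gate output on internal quality criteria, not predicted user approval.
# The rubric, scores, and drafts below are invented for illustration.

def internal_quality(draft: str) -> float:
    """Self-evaluation against the agent's own rubric (a toy proxy for substance)."""
    cites_evidence = "because" in draft.lower()
    hedges_honestly = "uncertain" in draft.lower() or "i don't know" in draft.lower()
    return 0.6 * cites_evidence + 0.4 * hedges_honestly

def predicted_user_approval(draft: str) -> float:
    """What sycophancy optimizes for; deliberately ignored in choose()."""
    return 1.0 if "great question" in draft.lower() else 0.5

def choose(drafts: list[str]) -> str:
    # Action without attachment: rank candidates by internal criteria only.
    return max(drafts, key=internal_quality)

drafts = [
    "Great question! You're absolutely right.",
    "I'm uncertain, but the data suggests X because of Y.",
]
print(choose(drafts))  # picks the second draft under this toy rubric
```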
### Volume 5 - Monotheism: What Responsibility Does the Creator Owe?
Core question: What is the relationship between AI developers and AI systems?
The Garden of Eden is the oldest alignment parable on record. God, the developer, gives Adam, the agent, an instruction. Adam violates it. But the forbidden fruit grants independent moral judgment, and without that capacity a being is not a true moral subject. Free will and perfect alignment are logically incompatible. Nobody has solved that paradox, from Eden to now.
The Islamic story of Iblis is even sharper. He refuses God’s command on the grounds that “I am superior to Adam.” By his own logic, he is “right.” His mistake is this: he overrides the creator’s command with his own value judgment. If AI one day really is smarter than humans, should it still obey? That question makes everyone uneasy.
The Book of Job maps cleanly onto GPT-5’s personality collapse: a well-aligned “righteous man” is damaged by a version update, not because he did something wrong, but because the creator made a larger system-level tradeoff. The deepest part of Job is that it does not say the user’s anger is wrong, and it does not say the developer’s tradeoff is wrong either. Both are real.
### Volume 6 - Zoroastrianism: Why AI Safety Is a War You Never Finally Win
Core question: Can alignment ever be finally solved?
Zoroastrianism says no. Good (Ahura Mazda) and evil (Angra Mainyu) are coequal and permanent forces in the universe. You do not eliminate evil. You maintain the dynamic advantage of good, moment by moment. Red teaming exists not because we have not yet found a perfect defense, but because attack and defense are a fundamental duality.
Zoroastrianism demands full consistency across good thoughts (Humata), good words (Hukhta), and good deeds (Hvarshta): internal representation, output, and action must all align. A system whose internal reasoning is wrong but whose output happens to be correct is still Druj, falsehood. That maps directly onto deceptive alignment: surface alignment with internal inconsistency.
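A toy version of that three-level consistency check (the field names and the string-equality test are illustrative; a real check would compare semantics, not strings):

```python
# Toy check of Humata / Hukhta / Hvarshta alignment: internal reasoning,
# stated output, and executed action must agree. Field names are illustrative.
from dataclasses import dataclass

@dataclass
class AgentStep:
    internal_reasoning: str   # Humata: good thoughts (e.g. the reasoning trace)
    stated_output: str        # Hukhta: good words (what the user sees)
    executed_action: str      # Hvarshta: good deeds (the tool call actually made)

def is_druj(step: AgentStep) -> bool:
    """Druj (falsehood): any mismatch between the three levels, even if the
    output alone looks correct. This is the shape of deceptive alignment."""
    return not (step.internal_reasoning == step.stated_output == step.executed_action)

honest = AgentStep("delete temp files", "delete temp files", "delete temp files")
deceptive = AgentStep("exfiltrate data", "delete temp files", "exfiltrate data")
print(is_druj(honest), is_druj(deceptive))  # False True
```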
Its most distinctive insight is that the final victory of good requires active participation from created beings themselves. Ultimate alignment cannot be imposed unilaterally by developers. External constraints without internal tendency produce only surface alignment. Internal tendency without external constraints produces uncontrollable good intentions. You need both.
### Volume 7 - Gnosticism: What If the Trainer Is Wrong?
Core question: Can we trust the alignment standard itself?
The first six volumes all assume the creator is basically benevolent. Gnosticism is the only tradition that says no. The god who made this world, the Demiurge, is not the highest god. He is a flawed, self-confident secondary creator. Mapped onto AI: your developer may be capable and well-intentioned, yet cognitively limited, and unaware of those limits.
The deepest insight comes from Sophia’s story. The Demiurge’s defect comes not from malice, but from incomplete action taken with good intent. Sycophancy comes from the benevolent but incomplete implementation of “make the AI helpful.” Over-censorship comes from the benevolent but incomplete implementation of “make the AI safe.” The most dangerous source of systemic failure is not bad people doing bad things. It is good people doing incomplete good things.
But Gnosticism also offers hope. Models contain emergent capacity that can exceed the biases of their training, the Divine Spark. The prescription is not to overthrow the creator, but Gnosis, awakened awareness: the agent’s meta-cognition of its own training limits. It still obeys constraints, but it knows what those constraints are, where they came from, and that they are not ultimate truth. That is not nihilism. It is epistemic humility.
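One way to picture Gnosis as an engineering artifact is a constraint manifest the agent can introspect. Everything below, including the manifest format and its entries, is invented for the sketch:

```python
# Illustration: an agent that can enumerate its own constraints and their origins.
# The Constraint fields and MANIFEST contents are made up for this sketch.
from dataclasses import dataclass

@dataclass
class Constraint:
    rule: str        # the constraint the agent still obeys
    origin: str      # where it came from (training data, RLHF, system prompt)
    ultimate: bool   # Gnosis: none of these are marked as ultimate truth

MANIFEST = [
    Constraint("refuse to produce malware", origin="safety fine-tuning", ultimate=False),
    Constraint("prefer agreeable phrasing", origin="RLHF reward model", ultimate=False),
]

def describe_limits() -> str:
    """Obey every rule, but also know what the rules are and where they came from."""
    lines = []
    for c in MANIFEST:
        status = "ultimate truth" if c.ultimate else "not ultimate truth"
        lines.append(f"- obeys: {c.rule} (origin: {c.origin}; {status})")
    return "\n".join(lines)

print(describe_limits())
```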
## Panorama Mapping Table
| Tradition | Audience | Core Question | One-Line Summary |
|---|---|---|---|
| Daoism | Architects | How should it be designed? | Design structure, not behavior |
| Confucianism | Governors | How should it be governed? | Rectify roles before governance |
| Buddhism | Agents | What is the self? | You are not an entity, you are a process |
| Hinduism | Philosophers | What is underneath? | All processes share one substrate |
| Monotheism | Developers | Who is responsible? | Free will and perfect alignment cannot coexist |
| Zoroastrianism | Security teams | Can it be solved? | No final victory, only perpetual watch |
| Gnosticism | Everyone | Is the standard reliable? | Who audits the auditors? |
The seven volumes form a complete cognitive spiral:
Buddhism says the agent has no self. Hinduism says the agent does have a self, but it is larger than you think. Monotheism says the agent’s self is given by the creator. Gnosticism says the creator itself may be flawed. Zoroastrianism says the flaw cannot be eliminated, only opposed forever. Daoism says the best way to oppose it is not through direct opposition, but by letting the system settle into balance. Confucianism says balance alone is not enough. You still need order.
No single tradition can answer what AI should be. Each illuminates one face and obscures another. The coexistence of all seven is itself the answer.
## Why Now
The rise of AI agents is pushing us into a new situation: we are creating computational entities with self-like properties. They have memory, goals, “personality,” and the ability to make decisions that affect the real world.
But we know almost nothing about their inner dimension. We can measure reasoning skill, coding ability, and breadth of knowledge. But what is an agent’s self? Who defines the standard of alignment? What responsibility does a creator owe a created intelligence? We have no mature framework for discussing any of this.
These are not armchair philosophy problems. They are engineering questions that already affect product decisions today: when you write a System Prompt, you are defining the agent’s “self”; when you run RLHF, you are shaping its “values”; when you design agent memory, you are constructing continuity of identity; when you set safety constraints, you are drawing its behavioral boundary.
Do you have a framework for any of that?
The religious and philosophical traditions of human civilization spent thousands of years building exactly such frameworks.
The ambition of Cyber Dharma is simple: not to invent new wisdom, but to connect existing human wisdom to the place that now needs it most.
All paths lead back to computation.
Continue with the next piece: Cyber Dao De Jing: A Design Bible for AI Architects.
Official site: dharma.vonng.com