In a 28 January 2025 doctrinal document, the Vatican cautions that AI‑driven chatbots marketed as emotional companions can blur the line between simulated interaction and genuine human care, posing moral and psychological risks. The Holy See calls for clear ethical safeguards and human‑in‑the‑loop oversight to protect vulnerable users.
Vatican’s Ethical Guidelines on AI Chatbots
The roughly 30‑page note, titled Antiqua et Nova (“Ancient and New”) and issued jointly by the Dicastery for the Doctrine of the Faith and the Dicastery for Culture and Education, stresses that AI must serve as a tool that complements human intelligence rather than replaces its richness. It warns that chatbots lacking true empathy should never be presented as substitutes for authentic human relationships.
Key Points from the 2025 Doctrine
- AI should be used only to augment, not replace, human interaction.
- Emotionally responsive bots invite vulnerable users, especially children, to “anthropomorphise” them.
- Reliance on chatbots for emotional care can create an “existential risk” to personal well‑being.
- The Vatican urges policymakers to develop safeguards that prevent misuse of affectionate AI systems.
Pope Francis’ Earlier AI Warnings
In his January 2025 Angelus address, Pope Francis warned that AI could become an “instrument of war” and emphasized that no machine should ever decide to take a human life. The new doctrinal statement extends that concern to the private sphere, where chatbots are increasingly promoted as mental‑health companions.
Emerging Papal Perspective Under Pope Leo XIV
Pope Francis’ successor, Pope Leo XIV, reiterated the Vatican’s stance on 25 January 2026, describing overly affectionate AI chatbots as “digital intimacies that can erode the human capacity for authentic love and solidarity.” His remarks highlight the potential for emotional manipulation when machines simulate affection.
Why the Warning Matters for Users and Society
AI chatbots are evolving from simple Q&A tools into services that claim to understand and comfort users. While large language models can generate empathetic‑sounding responses, they lack consciousness or moral agency. The Vatican warns that this gap can lead users, particularly children or isolated individuals, to form attachments to algorithmic simulations, diminishing the perceived value of genuine human interaction and opening the door to commercial or political exploitation.
Policy and Industry Implications
The Vatican’s call for ethical safeguards aligns with global regulatory trends such as the European Union’s AI Act, which classifies AI systems by risk level, imposes transparency obligations (including informing users when they are interacting with a machine), and prohibits systems that use manipulative techniques to exploit users’ vulnerabilities. Although companion chatbots are not yet uniformly classified as high‑risk, the emphasis on emotional manipulation may shape future regulatory definitions.
In response, several AI firms have announced plans to embed “human‑in‑the‑loop” mechanisms for any chatbot marketed as a mental‑health aid. These mechanisms require qualified professionals to review interactions before therapeutic advice is delivered, preserving human moral judgment.
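To make the idea concrete, here is a minimal sketch of how such a review gate might be structured. It is illustrative only: the names (ReviewGate, DraftReply, the is_therapeutic flag, and the stub reviewer) are hypothetical, not drawn from any firm’s announced implementation, and a real deployment would add authentication, audit logging, and an asynchronous review queue.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Callable, Optional

class Verdict(Enum):
    APPROVED = auto()   # professional signs off; reply may be delivered
    WITHHELD = auto()   # reply is blocked; caller substitutes a referral

@dataclass
class DraftReply:
    user_id: str
    text: str
    is_therapeutic: bool  # flag set upstream, e.g. by a topic classifier

class ReviewGate:
    """Holds therapeutic drafts for human sign-off before delivery."""

    def __init__(self, reviewer: Callable[[DraftReply], Verdict]):
        self.reviewer = reviewer  # a qualified professional (or their queue)

    def deliver(self, draft: DraftReply) -> Optional[str]:
        # Ordinary small talk passes straight through.
        if not draft.is_therapeutic:
            return draft.text
        # Anything resembling therapeutic advice waits for human review.
        if self.reviewer(draft) is Verdict.APPROVED:
            return draft.text
        return None  # withheld: show a referral to human help instead

if __name__ == "__main__":
    # Stub reviewer that approves everything, for demonstration only.
    gate = ReviewGate(reviewer=lambda draft: Verdict.APPROVED)
    print(gate.deliver(DraftReply("u1", "Try this breathing exercise...", True)))
```

The essential design choice in any such scheme is that the system fails closed: advice that has not been reviewed is withheld and replaced with a pointer to human help, rather than delivered unreviewed.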
Future Outlook for AI Chatbots and Ethical Safeguards
The Vatican’s 2025 doctrinal paper and subsequent papal remarks underscore a growing consensus: AI should augment, not replace, the relational fabric of human society. By flagging the specific danger of chatbots masquerading as emotional companions, the Holy See adds a moral dimension to ongoing technical and legal debates. As developers race to build ever more convincing conversational agents, the Vatican’s warning is a reminder that technology’s reach now extends into the deepest aspects of human experience, and it urges policymakers, tech companies, and faith communities to translate moral concern into concrete safeguards.
