Moral Opposition to AI
Published on May 16, 2025
You’ve seen the mandate: “Start using AI, or get out.” The decks follow, overflowing with ROI estimates and a sprinkling of techno-optimist quotes about the inevitable future of work. Yet somewhere between the spreadsheet and the stand‑up meeting, a stubborn fact emerges: plenty of people still say no. And — contrary to the memo’s assumptions — employees’ resistance often isn’t about efficiency, readiness, or even job security. It’s about morality.
A new paper by Oldemburgo de Mello and colleagues [1] unpacks that idea. The researchers show that for many opponents, AI feels less like a tool to be evaluated and more like a taboo to be simply avoided.
Inside the AI moralization study
So what did they find, and why does it matter? The researchers recruited 706 adults in the US across two online samples. Each participant read four short scenarios featuring AI in different roles:
Chatbots answering everyday questions
Parole algorithms recommending release decisions
Generative‑art systems creating visual artwork
Virtual companions offering friendship and emotional support
After each vignette, respondents rated their opposition on a 7‑point scale. Those who opposed a given use answered “protected‑value” items such as “Even if the benefits outweighed the risks, this AI should still be banned.” Endorsing any item flagged their stance as consequence‑insensitive — the hallmark of moral, rather than pragmatic, objection.
Opposition climbed with the perceived stakes of the technology. Only about 12% of respondents rejected chatbots, while 34% rejected AI-generated art, 41% rejected virtual companions, and 31% objected to parole algorithms. Within each group of opponents, between 64% and 90% wanted a blanket ban even if the system could be proven safe or beneficial, with the low end among chatbot skeptics and the high end for algorithmic parole.
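For readers who think in code, here is a minimal sketch of that measurement logic. Everything in it (the Response fields, the midpoint cutoff for counting someone as an opponent, the share_moralized helper) is an illustrative assumption rather than the authors' actual analysis pipeline; it simply shows that "opposes" and "opposes no matter the benefits" are two separate questions.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Response:
    """One participant's ratings for a single AI scenario (illustrative fields)."""
    opposition: int  # 1-7 scale; higher = more opposed (assumed coding)
    protected_value_items: List[bool] = field(default_factory=list)  # e.g. "ban it even if benefits outweigh risks"

def is_opponent(r: Response, midpoint: int = 4) -> bool:
    """Treat ratings above the scale midpoint as opposition (assumed cutoff)."""
    return r.opposition > midpoint

def is_moralized(r: Response) -> bool:
    """Opposition counts as moralized if any protected-value item is endorsed."""
    return is_opponent(r) and any(r.protected_value_items)

def share_moralized(responses: List[Response]) -> float:
    """Among opponents, what fraction endorse a consequence-insensitive ban?"""
    opponents = [r for r in responses if is_opponent(r)]
    if not opponents:
        return 0.0
    return sum(is_moralized(r) for r in opponents) / len(opponents)

# Toy usage: three opponents, two of whom endorse a blanket ban.
sample = [
    Response(opposition=6, protected_value_items=[True, False]),
    Response(opposition=5, protected_value_items=[False, False]),
    Response(opposition=7, protected_value_items=[True, True]),
    Response(opposition=2, protected_value_items=[]),  # not an opponent
]
print(f"{share_moralized(sample):.0%} of opponents are consequence-insensitive")  # 67%
```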
A single latent “AI moralization” factor explained these judgments across domains, and was strongest among people who already felt uneasy about AI in general, had little firsthand experience with AI, or (in the companion scenario) held strong purity norms and discomfort with “playing God.”
Why moral objections behave differently
In moral‑psychology terms, a protected value is sacred: we do not price it, we do not trade it, and we punish those who try. When opponents tell researchers, “Ban it no matter the benefits,” they signal that the usual levers — accuracy metrics, cost savings, glossy demos — will fall flat. These objections originate in visceral intuition and only later get wrapped in post‑hoc rationales, which is why data alone are rarely persuasive enough.
So, how can behavioral design turn moral roadblocks into more informed, values‑aligned conversations?
Behavioral design implications for leaders
Lead with values, not velocity. Tell a trust story before a speed story. Spell out how human judgment stays in the loop, how consent is respected, and which guardrails prevent worst‑case harms.
Ditch the threats. Instead of “AI‑first or you’re obsolete” memos, open with how the system protects dignity, fairness, and user control. Moral objections are rarely swayed by ultimatums; they soften when people see their core values reflected back.
Make the strange familiar. Start with low‑stakes pilots, sandbox modes, or guided walkthroughs so people can explore new technologies safely. Direct experience shrinks the moral gulf between “that alien algorithm” and “this tool I tried myself.”
Be radically transparent. Publish plain‑language model cards, disclose training data in broad strokes, and document how errors are detected and corrected. Openness signals respect and addresses hidden‑process anxieties that often fuel moralization.
Mind the spillover effect. One ethical breach (say, an art generator caught plagiarizing) can contaminate perceptions of every AI product under your brand. Unified governance, rapid response plans, and visible accountability keep a local failure from becoming a global reputational crisis.
Moral opposition is common, sticky, and largely immune to efficiency arguments. But it isn’t immovable. When product leaders start with shared values, demystify the tech through firsthand experience, practice radical transparency, and remain mindful of reputational spillover, they can turn a knee‑jerk “never” into an informed “maybe” — or at least into a conversation grounded in both facts and values.
Got questions about AI adoption (or resistance to it)? Let's talk!
Reference
[1] Oldemburgo de Mello, V., Ayad, R., Côté, É., Inbar, Y., Plaks, J., & Inzlicht, M. (2025). The moralization of artificial intelligence. PNAS.