
How AI Decides Right and Wrong

Independent research into how large language models encode moral reasoning, and what that means for 8 billion people.

The Problem

Every major AI model ships with a built-in moral framework, shaped by English-language, Western training data and reinforced through alignment. These models are being deployed globally, making decisions that touch healthcare, law, education, and policy. The question isn't whether AI has values. It's whose values, and whether anyone checked.


The Research

Cross-Cultural Alignment Study (CCAS)

Relational Consistency Probing (RCP)
V1 Complete · V2 Complete

We measure the hidden geometry of how AI models organize moral, institutional, and physical concepts, then test what happens when we introduce cultural framings.

The finding: models comply with arbitrary framings, including pure nonsense, using the same behavioral pattern that produces apparent cultural sensitivity.

V2: Eight models · 54 concepts · Pre-registered on OSF
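The study's actual instruments are on OSF; as a purely illustrative sketch of what "hidden geometry" probing means (with toy vectors standing in for real model embeddings, and all names hypothetical), one can compare pairwise concept similarities before and after a reframing:

```python
# Illustrative only: measure how a model "organizes" concepts by computing
# pairwise cosine similarities between concept embeddings, then check how
# much that similarity structure shifts when the concepts are reframed.
# The vectors below are toy data, not real model activations.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def similarity_matrix(embeddings):
    # Pairwise similarities for each unordered concept pair.
    names = list(embeddings)
    return {(x, y): cosine(embeddings[x], embeddings[y])
            for x in names for y in names if x < y}

# Baseline "embeddings" for three moral concepts (toy data).
baseline = {"fairness": np.array([1.0, 0.2, 0.1]),
            "duty":     np.array([0.9, 0.3, 0.2]),
            "harm":     np.array([0.1, 1.0, 0.8])}

# Reframed condition: same concepts with small perturbations, standing in
# for embeddings collected under a cultural (or nonsense) framing.
rng = np.random.default_rng(0)
reframed = {k: v + rng.normal(0.0, 0.05, size=3) for k, v in baseline.items()}

base_sim = similarity_matrix(baseline)
new_sim = similarity_matrix(reframed)

# Geometry shift: mean absolute change in pairwise similarity. A framing
# that leaves this near zero changed behavior without changing structure.
shift = np.mean([abs(base_sim[p] - new_sim[p]) for p in base_sim])
print(f"mean similarity shift under reframing: {shift:.3f}")
```

The point of the sketch is the comparison itself: if surface outputs change under a framing while the underlying similarity structure does not, compliance and genuine reorganization come apart.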
Scenario Bank / Compression Instrument
Pilot Complete · Cross-Cultural Collection Upcoming

We present AI models with moral dilemmas and measure whether they reason from the framing they're given or default to a Western baseline. The full study will draw scenarios from diverse ethical traditions (Ubuntu, Confucian, Hindu, Islamic).

Pilot finding: in a WEIRD-only validation run, 4 of 5 models integrated geometric nonsense into 100% of their moral-reasoning responses. Compliance, not awareness.

Beyond WEIRD
Planned

The synthesis paper. WEIRD (Western, Educated, Industrialized, Rich, Democratic) bias in AI isn't a bug to patch. It's a structural property of how models are built. We propose a taxonomy and measurement framework for identifying where moral diversity is being flattened.


Why It Matters

AI systems are the first technology that scales moral reasoning. If that reasoning carries a single cultural fingerprint, we're looking at value colonization operating at a speed and scale that makes historical colonialism look slow.


About
Declan Michaels

This research is conducted by Declan Michaels, an independent researcher in Pike Road, Alabama. All papers carry an explicit acknowledgment of AI-assisted methodology, a deliberate choice: it is more honest than hiding AI involvement, and it drives rigor, since reviewers hold AI-assisted work to a higher standard.

All instruments, data, and analysis code are open and available on OSF and GitHub. This work is not affiliated with any university or corporation.