After Orthogonality: Virtue-Ethical Agency and AI Alignment
The Gradient | Peli Grietzer | February 18, 2026

Preface

This essay argues that rational people don’t have goals, and that rational AIs shouldn’t have goals. Human actions are rational not because we direct them at some final ‘goals,’ but because we align actions to practices[1]: networks of actions, action-dispositions