
After Orthogonality: Virtue-Ethical Agency and AI Alignment

The Gradient · Peli Grietzer · February 18, 2026

Preface

This essay argues that rational people don’t have goals, and that rational AIs shouldn’t have goals. Human actions are rational not because we direct them at some final ‘goals,’ but because we align actions to practices[1]: networks of actions, action-dispositions …

Read full article on The Gradient
