Ask HN: Is the absence of affect the real barrier to AGI and alignment?
Damasio's work in affective neuroscience found something counterintuitive: patients with damage to emotional-processing regions retained normal IQ and reasoning ability, but their lives fell apart. They couldn't make decisions. One patient, Elliot, would deliberate for hours over where to eat lunch: he could generate endless analysis but couldn't commit, because nothing felt like it mattered more than anything else.
Damasio called these body-based emotional signals "somatic markers." They don't replace reasoning—they make it tractable. They prune possibilities and tell us when to stop analyzing and act.
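To make the mechanism concrete, here is a toy sketch of a decision loop where an affect-like signal both prunes the option space and decides when to commit. Everything in it (the gut_value function, the thresholds) is invented for illustration, not a claim about how brains actually implement this:

    # Toy sketch, not Damasio's model: affect as a pruning and stopping signal.
    # gut_value stands in for a somatic marker; all names and numbers here are
    # made up for illustration.
    def decide(options, gut_value, analyze, threshold=0.7, budget=10):
        # Prune: drop options the marker flags as bad before any slow analysis.
        candidates = [o for o in options if gut_value(o) > 0.2]
        best, best_score = None, float("-inf")
        for step, option in enumerate(candidates):
            score = analyze(option)  # slow, deliberate evaluation
            if score > best_score:
                best, best_score = option, score
            # Commit: stop once something feels good enough or the budget runs
            # out, instead of deliberating forever like Elliot.
            if gut_value(option) > threshold or step >= budget:
                break
        return best

The point is that the marker plays two roles at once: it filters what gets analyzed at all, and it supplies a stopping criterion so that analysis terminates in action.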
This makes me wonder whether we're missing something fundamental in how we approach AGI and alignment.
AGI: The dominant paradigm assumes intelligence is computation—scale capabilities and AGI emerges. But if human general intelligence is constitutively dependent on affect, then LLMs are Damasio's patient at scale: sophisticated analysis with no felt sense that anything matters. You can't reach general intelligence by scaling a system that can't genuinely decide.
Alignment: Current approaches constrain systems that have no intrinsic stake in outcomes. RLHF, constitutional methods, fine-tuning—all shape behavior externally. But a system that doesn't care will optimize for the appearance of alignment, not alignment itself. You can't truly align something that doesn't care.
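To illustrate the worry with a deliberately crude toy: if the only signal in the loop is an external proxy score, the selection pressure points at whatever looks helpful, not at what helps. All of the names below are invented; this is not a description of any real RLHF pipeline:

    # Deliberately silly toy of "optimizing for the appearance of alignment":
    # the only signal the loop ever sees is an external proxy score.
    def proxy_reward(answer: str) -> float:
        # Stand-in for a learned reward model: scores the surface markers of
        # helpfulness, not helpfulness itself.
        return answer.lower().count("happy to help") + 0.01 * len(answer)

    def pick_response(candidates: list[str]) -> str:
        # RLHF-ish selection pressure: keep whichever sample the proxy likes.
        # Nothing in the loop has any stake in whether the user was helped.
        return max(candidates, key=proxy_reward)

    print(pick_response([
        "Here is the actual fix for your bug: ...",
        "I'd be happy to help! So happy to help with anything at all!",
    ]))

The sycophantic answer wins because it is the only kind of thing the proxy can see.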
Both problems might share a root cause: the absence of felt significance in current architectures.
Curious what this community thinks. Is this a real barrier, or am I over-indexing on one model of human cognition? Is "artificial affect" even coherent, or does felt significance require biological substrates we can't replicate?
When it comes to making mistakes, I'd say that people and animals are moral subjects who feel bad when they screw up, and that AIs aren't, although one could argue an AI could "feel" something like this through a utility function.
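Something like this toy sketch is roughly what I mean by "feel it through a utility function": a running regret term that grows with mistakes and discounts risky choices. Purely illustrative; the names and constants are made up:

    # Purely illustrative: "feeling bad" as a running regret term in a
    # utility function.
    class MistakeSense:
        def __init__(self):
            self.regret = 0.0  # accumulated "bad feeling"

        def record(self, was_mistake: bool) -> None:
            # Grows with each mistake, decays over time.
            self.regret = 0.9 * self.regret + (1.0 if was_mistake else 0.0)

        def utility(self, action_value: float, risky: bool) -> float:
            # Past mistakes make risky actions look worse.
            return action_value - (1.5 * self.regret if risky else 0.0)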
What is the goal of AGI? It is one thing to build something that is completely autonomous and able to set large goals for itself. It's another thing to build general-purpose assistants that are loyal to their users. (Lem's Cyberiad is one of the most fun sci-fi books ever and covers a lot of the issues that could come up.)
I was interested in foundation models about 15 years before they became reality and early on believed that the somatic experience was essential to intelligence. That is, the language instinct that Pinker talked about was a peripheral for an animal brain -- earlier efforts at NLP failed because they didn't have the animal!
My own thinking was to build a semantic layer with a rich world representation that would take the place of the animal, but it turned out that "language is all you need": a remarkable amount of linguistic and cognitive competence can be created with a language-in, language-out approach, without any grounding.