AI expert warns current systems lack moral agency
Ben Goertzel, the AI researcher known for his work on artificial general intelligence, recently shared some pretty sobering thoughts about where we stand with AI development. In an interview, he made it clear that today’s AI systems are still just tools—powerful ones, sure, but tools nonetheless. They’re brittle, manipulable, and far from being moral actors.
He thinks AI only becomes a moral actor when it starts making decisions based on an actual understanding of right and wrong, not just following programmed instructions. You’d need to see persistent internal goals, learning driven by its own experience, novel creation that reflects a point of view—that sort of thing. Until then, we’re dealing with sophisticated tools with guardrails, not autonomous minds.
Training practices shape future behavior
Goertzel’s really concerned about how we’re training AI today. He thinks the way we build these systems now will fundamentally shape how they behave tomorrow. If models are trained on biased or narrow data, or in closed systems where only a few people make decisions, that could lock in existing inequalities and harmful power structures.
“To prevent this,” he says, “we need more transparency, wider oversight, and clear ethical guidance right from the start.” It’s not something we can fix later, he suggests. The foundations matter.
Democratic governance remains elusive
Perhaps the most striking part of the interview was Goertzel’s assessment of democratic AI governance. He called it “more of a fragile ideal than a current reality.” In a perfect world, he imagines, we could collectively weigh the enormous trade-offs: the promise of curing disease and ending hunger against the risk of AI acting unpredictably. But given today’s geopolitical fragmentation, that level of coordination seems unlikely.
Still, he thinks we can approximate it. Building AI with compassion and developing it through decentralized, participatory models, like those behind Linux or the open internet, could embed some democratic values even without a world government. It won’t be perfect, but it’s a practical step toward safer, collectively guided AI.
Responsibility and moral consideration
Goertzel agrees with critics like Jaron Lanier who argue that society can’t function if we hand responsibility over to machines. At the same time, he believes we can move toward more autonomous, decentralized AGI if we build it with the right foundations. Systems need to be transparent, participatory, and guided by ethical principles so that even as they act independently, humans are still overseeing and shaping their behavior.
He makes an interesting point about moral understanding too. You don’t hard-code morality as a list of rules—that just freezes one culture and one moment in time. Instead, you build systems that can become genuinely self-organizing, that learn from experience, consequences, and interaction.
Looking ahead 10 to 20 years, Goertzel thinks success would look like living alongside systems that are more capable than us in many domains, yet integrated into society with care, humility, and mutual respect. Failure, on the other hand, would look like AGI concentrated in closed systems, driven by narrow incentives, or treated only as a controllable object until it becomes something we fear.
It’s a nuanced perspective, I think. Goertzel isn’t calling for us to stop AI development, but rather to approach it differently. He wants systems that can develop their own understanding from their own trajectory in the world, not just recombine what they were fed. That’s the difference, he says, between a tool with guardrails and a partner that can actually learn why harm matters.
