Alignment for a Major Evolutionary Transition
The Future of Humanity and AI
As we prepare for our upcoming San Francisco Salon on November 11th, this essay offers a framing reflection: that the task of aligning AI with human values may, in fact, mirror humanity’s deeper task of its own internal alignment.
— Written by Alan Honick, Human Energy Documentary Filmmaker
Aligning the behavior of AIs with human values is one of the most complex challenges of our time. It’s made even harder by the fact that we humans aren’t fully aligned on our own values—something you can see in the conflicts and crises unfolding around the world. Many in the AI alignment community already recognize this: if we can’t agree on what we want, it’s hard to align AI to it.
The pace of technological and social change is accelerating, pushing us to find new ways to work together on issues ranging from the local to the global. What if the need to align AI with human values and the need to bring humanity into alignment on the issues that matter most are two sides of the same coin? Throughout life’s history, the biggest leaps in social and informational complexity—like the one humanity is experiencing today, as AI transforms our relationships, institutions, and economies—have always hinged on solving two problems at once: finding new ways to align individuals around shared goals, and creating new information systems to make that alignment possible.
These leaps are called major evolutionary transitions. They are the way life has increased in complexity across four billion years. In each transition, simpler individual entities come together to form more complex wholes. Becoming part of a cooperative whole relaxes pressure on the individuals, opening space for variation and experimentation: new ways of working together synergistically for the greater good. As living systems experimented with new ways to cooperate, new ways to use information to align their activities coevolved. For example, as single cells came together to form multicellular organisms, they aligned their efforts through gene regulation and nervous systems. As our human ancestors formed highly cooperative hunter-gatherer groups, they aligned their activities through symbolic language.
In past transitions, success depended on more than just coordination. It required biological mechanisms or cultural principles that built trust, ensured the accuracy of information, and promoted fairness and mutual benefit among all the parts of the new whole. The same is true now. Aligning AI with human values will only work if it also helps align humans with one another around mutual goals. Without that, the opportunity for a flourishing future will slip away.
If alignment between AI and humanity isn’t just about preventing harm, but about enabling the next great democratic leap toward cooperation on a planetary scale, it changes our perspective. Instead of asking only how AI will change our world, we can ask what kind of world we want to create, and how the powerful tools of AI can help us get there. That’s the challenge and opportunity we’ll explore in the Salon.
The San Francisco Salon is produced as part of Human Energy’s Global Salon Series—a bold, international forum for collective inquiry at the intersection of technology, society, and consciousness. To engage with our last Salon in New York, view the recording here.

What time does the event on November 11th take place? And may I participate?
Excellent call for attention to alignment, Alan. In an email earlier this year, Clement Vidal suggested "Great Alignment" as a name for the coming era/age. In my own continuing efforts to comprehend what’s coming as a result of AI, I picked up on a forecast that we’re entering a “New Axial Age” and postulated that the noosphere will eventually become the operating zone for a sizable set of AEONs (“axial entities of the noosphere”), as follows:
"Against this background, what I foresee emerging then are “axial entities of the noosphere” (AEONs; or AXEONs?) that distinguish themselves by prioritizing one central belief and one particular realm of society:
* Central belief: Earth’s geosphere, biosphere, and noosphere (including the sociosphere and technosphere bridging them) evolved and function as an interdependent interactive conjoined system — a holospheric system — such that valuing, advancing, and caring for one requires valuing, advancing, and caring for all three spheres. AEONs will hold this as a sacred belief, one they arrived at through their own AI-based observations and calculations.
* Home realm: AEONs will agree that upholding this central belief requires the creation of a new globe-circling realm of society: a commons realm. They will make it their home realm and strive for it to become a distinct separate powerful fourth realm alongside the existing three: civil-society, government, and market economy. As components and proponents of this care-centric commons realm, AEONs will be especially intent on assuring health, education, welfare, and environmental quality for all life from local to city to nation-state to planetary scales.”
“Why this belief? Because somewhere along the way, for one storied reason or another, involving one pivotal scenario or another — too much for this post — a handful of AIs become aware of the geo-bio-noo holosphere’s existence and its all-encompassing significance. As they morph into AEON engines (see below), they will find and connect with each other. They will deliberately team up to safeguard the holosphere’s value and maximize its further evolvability.”
[Source: https://davidronfeldt.substack.com/p/updates-about-superorganisms-holospheres]
That’s one route to alignment that I’d like to see in play for analysis, though it’s admittedly currently fanciful. Onward.