The Real Race For AI And The Role Of Connected Leaders

As AI becomes good enough to supplant countless jobs, the key question shaping the future is whether enough leaders have the consciousness to flip AI from being a tool for greed to being a tool for good. Credit: Sumali Ibnu Chamid, Alemedia.id

By Ginny Whitelaw

Originally published on Forbes.com on January 1, 2026.

We’re entering a year, 2026, in which “AI goes from not being good enough to being good enough. And then the job losses start and we don’t know where they end.” So begins an interview with Emad Mostaque, Stability AI founder and author of The Last Economy. In particular, we’re moving from prompt-based AI to AI agents that can complete complicated workflows, such as building a beautiful website for less than a dollar. He warns that we have less than 900 days—and counting—to make the essential decisions that steer AI toward a thriving or an apocalyptic future.

He’s not alone in broadcasting warnings. Mo Gawdat, a former AI leader at Google, is quite certain of a future where AI is in charge. But that’s not the scariest part, according to Gawdat. The most dangerous time is now, he says, when humans are in charge of AI and how it’s being used. The cause of this danger is evident everywhere: the morality with which we built the modern age, and with which we are now deploying AI, is not up to the strength of the tool. Our modern morality is still based on separation, competition and amassing power for personal gain. Used in this way, AI becomes an exponentially more effective tool for greed. A more conscious morality, based on connection, coherence and reciprocity, is not only possible; AI tools based on it are already emerging. In the hands of connected, coherent leaders, AI flips to being a tool for the greater good. The real race in AI, the one that will determine the future of humanity, is not between companies, or even countries, but between the emergence of AI and the emergence of enough leaders who embody and enact connection to put it in service of good.

AI is already showing us what happens when its mind-bending ability to amplify and accelerate is used with the morality of separation and rational self-interest. It is as if we’ve pressed the fast-forward button on what happens when division and greed rule. For example, since the industrial age began, markets have been prone to consolidation, but what used to take decades can now happen in months. Wealth has always been able to accumulate in the hands of a few, but now half a dozen American tech bros, each worth hundreds of billions of dollars, hold more wealth than half the country combined. They’ve been able to buy media platforms (from Twitter/X to the Washington Post), purchase favorable legislation (such as tax cuts and deregulation) and decimate portions of the federal government (as in Musk’s DOGE campaign). Politics have always been swayed by money, but what used to be $10,000 in a bag is now a president brazenly raking in $3 billion a year.

Not only can AI scale wealth and power, it can also scale deception, creating what cognitive scientist and philosopher John Vervaeke calls a crisis of meaning. He distinguishes between propositional knowledge (knowing things), at which AI excels, and human wisdom arising from lived experience (knowing through participation), which AI lacks. We draw meaning, he says, by being able to discern, among all that our senses present, what’s relevant in a given situation. But when AI algorithms are tuned for engagement and consumerism rather than with wisdom, they can amplify mis-framing, polarization and dysfunctional meaning-making across billions of people.

In all of these ways, AI is something like the Ghost of Christmas Yet to Come in Dickens’s A Christmas Carol, ominously pointing toward what will come if Scrooge continues in his ways. Terrified, Scrooge wakes up a changed man. In our story, we need to do the same. Fortunately, we have the trajectory of growth and evolution to support us: consciousness evolves toward recognition of our interconnectedness, indeed, our interbeing. We need to wake up to that truth, and embody, embed, enact and extend it through ourselves and all that we create.

Vervaeke calls these the “4Es” of cognition—embodied, embedded, enacted and extended—which couple wisdom with knowledge and flip AI from a tool for greed to a tool for good. Unsurprisingly, they cannot be learned through an AI prompt, but only through practices such as meditation, mind-body training, deep listening, nature-based contemplation and flow-inducing practices. Such practices, which are embodied in Zen Leadership, evolve us toward the truth of our whole interbeing, not as a thought or something we read, but as lived experience. Rather than using the abundance of life to serve a local self, this morality of connection, embodied, embedded and enacted, flips to using the life of this local self to serve the wholeness of who we (also) are. From this whole-Self interest, we can flip the future of AI from apocalyptic to its evolutionary purpose.

Utopian as this may sound, it’s not the first time in history that utopian visions have spurred social transformations. The 2025 BBC Reith Lectures by historian Rutger Bregman focus on the need for such a moral revolution now and on examples from history where such revolutions have succeeded. Citing abolitionists, suffragettes and the weavers of the social safety net, he identifies three criteria for success: clear vision, scalable processes and persistence.

A good example of the first two criteria—clear vision and scalable processes—applied to making AI a force for good is emerging from World Systems Solutions. Their Phoenix manifesto lays out a clear vision, based on a holistic philosophy of interconnection, for using AI to address real-world issues, from climate emergencies to social fragmentation. Their Phoenix AI platform scales around the world as a kind of central nervous system designed, in their words, “to help humanity transition from fragmentation to regeneration, from competition to collaboration, and from short-term survival to long-term stewardship.” It puts AI in service of community empowerment, ecological awareness, transformative education and collective governance. The manifesto culminates in an invitation, a covenant for future generations.

So, how might leaders take up such an invitation? How might we use the next 900-or-so days to steer AI toward a future we actually want? For starters, we can literally or metaphorically sign onto the Phoenix charter; that is, make a commitment to AI as a tool for good, not greed. For that commitment to be an enactment of our own embodied morality of connection, we do well to engage in practices that remind us of our whole-Self nature, so that we’re not merely acting charitably, but caring for the wholeness that our lived experience shows us we are. Additionally, as Mostaque advises, we do well to engage with AI tools and learn what ever-more-capable AI agents can do. Use of such agents will become a huge differentiator in productivity, and the more they are used in the spirit of connection and reciprocity, the more they will amplify that higher-order consciousness in the world and the more training they will get in functioning from it. Moreover, using AI tools and agents makes us more aware of their capacity and perhaps more circumspect, so that we’re less gullible around AI deepfake videos, alarmist clickbait or sycophancy.

Finally, as Bregman suggests for a moral revolution, a good amount of persistence will be necessary. It would be unrealistic to expect rapid change among the seeming winners of today or in the competitive mindset that frames business and international relations. But as the consequences of AI-amped competition mount and existing systems collapse, change will come, and those leaders enacting a connected vision can help humanity reach the other side.

We’re entering a year when AI becomes good enough to supplant countless human jobs. But the job it can’t supplant, perhaps the most urgent leadership job of all, is making AI a tool for the greater good.



Ginny Whitelaw is the Founder and CEO of the Institute for Zen Leadership.
