
From Citrini to Citadel & Everything In Between: Forging or Failing the Great AI Transition

  • Writer: Nick Jankel
  • Mar 2
  • 8 min read

Updated: 4 days ago

DO NOT UNDERESTIMATE The Scale of the Business Moment


We are not simply experiencing another wave of digital transformation. We are living through the early stages of the Great AI Transition, a structural reconfiguration of work, value, authority, truth, relationality, identity—and therefore society.


This transition is only just beginning.


Jobs will disappear. New jobs will emerge. Entire categories of expertise will compress, fragment, or evolve. Sectors will collapse. New economies of value will be generated, whether through one-employee unicorns or through networks of vibe-coded pathways and integrations.


The pace is way faster than previous industrial shifts. The psychological disruption is far deeper.


This is not another digital transformation cycle. Digital transformation optimizes processes.

The Great AI Transition will reshape our operating systems.


Digital tools increase efficiency. The Great AI Transition reshapes how decisions are made, how knowledge is created, and how authority is distributed.


The Great AI Transition transforms identity, not just operations. That's why it feels so destabilizing.

Yet there are ways to find coherence and clarity amidst the chaos and complexity, if we are willing to pause, reflect, and make sense of things... and lean into what makes our consciousness so human.


The Two Warring Camps With Two Very Different Narratives of the Transition


The economic debate around artificial intelligence is crystallizing into two distinct camps. Understanding both is essential for leaders navigating the Great AI Transition. We could call it the Citrini vs. Citadel Wars.


CAMP CITRINI: The Collapse Thesis


Camp Citrini argues that generative AI represents not just automation but also the large-scale displacement of cognitive labor. Their concern is not incremental productivity gains.

It is a structural labor market disruption that then undermines real estate markets, tax bases, and society as a whole.


As they write in their futuristic paper: "This is the first time in history the most productive asset in the economy has produced fewer, not more, jobs. Nobody’s framework fits, because none were designed for a world where the scarce input became abundant. So we have to make new frameworks. Whether we build them in time is the only question that matters."


The argument runs roughly as follows:

  1. Generative/agentic AI substitutes directly for high-skill knowledge work.

  2. Labor income declines significantly as firms automate white-collar roles (such as the 40% headcount reduction announced by Jack Dorsey).

  3. Lower aggregate wage income reduces consumer demand.

  4. Demand contraction leads to deflationary pressure and slower economic growth.


Unlike prior technological revolutions that primarily displaced manual labor, AI targets:

  • Legal analysis and drafting

  • Construction and logistics

  • Software development

  • Financial analysis

  • Marketing content creation

  • Customer service

  • Even elements of medical and advisory work


If a significant percentage of cognitive work becomes automatable, then labor’s share of income could decline more rapidly than capital can redistribute gains through spending (or than governments can retrain and recalibrate the workforce).


Related arguments can be found in discussions about “technological unemployment” dating back to John Maynard Keynes, and more recently in debates on automation and inequality, such as those explored by MIT economists Daron Acemoglu and Simon Johnson in Power and Progress (2023).


The deeper fear within Camp Citrini is not temporary disruption. It is a persistent structural imbalance:

  • AI-driven productivity gains accrue primarily to capital owners.

  • Labor markets cannot reabsorb displaced workers fast enough.

  • Aggregate demand weakens.

  • Political instability rises.


Western communities know what this is like: it happened recently with the China Shock of the 2000s to 2020s, and nations are paying the price as we speak. The loss of meaningful work and middle-class lifestyles has touched everyone, and for most it has been, and still is, painful.


At its strongest, the collapse thesis holds that, unless wealth redistribution, reskilling, and systemic reorganization mindsets and mechanisms evolve rapidly, AI could lead to prolonged economic stagnation, rising inequality, and socio-political chaos.


Marx would be proven right: capitalism would have created the conditions for its own collapse.


CAMP CITADEL: The Growth Thesis


Camp Citadel takes a different macroeconomic lens. Their argument is that AI is best understood as a classic positive productivity shock. As Frank Flight puts it: “Productivity shocks are positive supply shocks: they lower marginal costs, expand potential output, and increase real income.”


This perspective aligns with long-standing economic models of supply-driven growth. Historically, from the steam engine to electrification to the internet, productivity-enhancing technologies have followed a consistent pattern:


Lower marginal costs → Lower prices → Increased real purchasing power → Expanded consumption → Higher investment → New industry formation


This is consistent with standard macroeconomic theory of positive supply shocks, as described in mainstream economic literature and textbooks, and supported by long-run data from prior industrial revolutions.


For the collapse scenario to materialize, two extreme assumptions would have to hold:

  1. Labor income collapses entirely.

  2. Capital income has zero spending velocity.


Both are historically implausible. Even when technologies displace workers in specific sectors, profits are typically:

  • Reinvested

  • Distributed as dividends

  • Taxed and redistributed

  • Spent by capital owners


Moreover, Camp Citadel emphasizes that AI is more likely to be a complement to human labor than a strict substitute. The modern economy contains vast domains resistant to full automation:

  • Physical coordination

  • Supervisory oversight

  • Legal liability environments

  • Relational negotiation

  • Ethical and contextual judgment

  • Leadership!


Complementarity between capital and labor has historically been a dominant feature of technological progress. Technology usually reshapes skill demand rather than eliminates labor wholesale.


Camp Citadel does not deny disruption. It argues that as AI increases productive capacity, new industries and roles emerge, and aggregate output ultimately grows. The transition may be volatile, but growth will expand, and most will gain.


The Weird New World


The reality is that it won't be like either camp, Citrini or Citadel, portrays. As a professional futurist keynote speaker—who has run countless scenario-planning projects for multinationals and government institutions—I know that no single scenario ever emerges in its entirety.


Each scenario is likely to emerge at different times and places, depending on a multitude of factors in the complex adaptive system that is human society. What is vital is everything in between: the missing and messy middle between easy-to-state, overly mechanistic, and clickbait-optimized grand narratives.


The complexity of our reality evades neat partisan predictions—abundance or collapse—and simple silver bullet solutions to either polarity. We simply cannot imagine how complexity will generate more complexity, and how innovations will generate more innovations as the Great AI Transition gathers speed.

As one pundit wrote in the WaPo: "Remember... how limited our imaginations are in the face of a true technological revolution: Neither 18th-century artisans nor their industrial rivals could have deduced the five-day workweek, the interstate highway or the rise of mass higher education from the operations of a primitive textile mill.


"Whatever is coming, it will almost certainly be weirder and more surprising than any doom-filled prophecy or utopian fantasy you’ll read today."


What is often forgotten in future-scoping, sense-making, and foresight is that whichever scenario(s) we end up with will depend on our choices as leaders.



Yes, disruption will occur. It already is. Many roles will compress dramatically. A few already are. Some sectors will be wiped out, and the pain will be immense. And the pace really is unprecedented.


But history suggests that AI, and any new technology, can expand what is possible rather than collapse it... assuming we choose to adapt, grow, and expand our palette of responses within our consciousness.


The Great AI Transition will not be smooth. If you want some tips on how to lead your teams and yourself through AI uncertainty and anxiety, deep dive here.


Yet the Great AI Transition is unlikely to produce permanent collapse. The macro outcome will depend heavily on micro-level decisions made by people like you and me:

  • How organizations deploy AI (my firm, SOL, has already pivoted from one AI build to another in months as agentic AI capabilities expand so rapidly).

  • Whether leaders design for human augmentation or substitution (which is not just a moral choice but one that should rest on outstanding customer service).

  • How quickly new skills and adaptive capabilities are cultivated (and whether we invest heavily in new skill and mindset development).

  • How trust, cohesion, and coherence are maintained in complexity (and whether we can hold the center as the weirdness gets weirder).


Trust In The Process of Transformation: The Expanding Adjacent Possible


Stuart Kauffman, a complexity scientist and biologist, has shown in his work that new fields of possibility open when we adapt, evolve, and create through innovation and adaptation. When some invent the printing press, others invent the job of the traveling bookseller. Some invent online bookstores. Each creation opens up what he calls a new “adjacent possible” for others to create into.


Every technological leap expands the adjacent possible, the set of new opportunities made available by current innovations and adaptations. The steam engine did not eliminate jobs in the aggregate. It enabled railways, global trade, and urban design. The internet did not eliminate work. It enabled new industries, new ways of relating, and new business models.


An expanded adjacent possible emerges as each complex living system evolves, develops, and creates. Each field of possibility we unfold provides us with new choices to create, invent, and adapt. Through making use of those new choices, we open up new fields of possibility. It goes on and on, probably infinitely (until the heat death of the universe, perhaps).


AI is expanding the adjacent possible at an unprecedented velocity. We are already seeing the early emergence of new roles in:

  • Human–AI workflow architecture

  • AI ethics and governance

  • Algorithmic oversight

  • Synthetic data design

  • Strategic foresight modeling

  • Trust and risk integration

  • AI keynote speakers!


Life begets life. Innovation begets innovation. It all depends on what happens within our human consciousness, not so much what happens in artificial or alternative intelligence.


The Great AI Transition Is Psychological Before It Is Technical


The great challenge is that adjacent possibilities are widening faster than institutions, nations, and individuals can absorb. Leadership is the bottleneck. The deepest disruption is not technological. It is existential.


What happens to my expertise if AI analyzes faster than I can? What becomes uniquely human? What does authority mean when knowledge is commoditized?


Technical skills now have shrinking half-lives. Many who are not able to adapt their skills, mindsets, and behaviors will be left behind.


As I have written about in this piece, There Are No Quick Fixes For Adapting To AI Disruption, traditional training alone will fail. What is needed are protocols for leadership development—and the transformation of leadership consciousness—that support us in unlocking the adjacent possible for ourselves and our organizations.


The AI transition demands that workers, especially leaders, build:

  • Cognitive flexibility

  • Emotional regulation

  • Ethical maturity

  • Learning agility

  • Psychological safety

  • Ethical clarity

  • Strategic imagination

  • Identity adaptability

  • Constant creativity

  • Relational reciprocity

  • Collective coherence


The organizations that succeed will not be those with the best tools. They will be those with the most adaptive leaders and cultures.


As one executive at SAP, Caroline Hanke, insightfully put it: "I truly believe agility and openness to change—people that can cope with change and adapt quickly—those will be the central skills I want my teenager to have. The technical skills relevant for today are not going to be relevant even two years from now. It really is more soft skills—critical thinking, adaptability. Also, ethics—where human judgment comes into play.”


During the Great AI Transition, value will shift upward toward the higher functions of human consciousness. Toward judgment. Toward discernment. Toward insight. Toward conceptual thinking. Toward relational trust. Toward meaning-making. Toward executive presence. Toward co-creativity. Toward breakthrough innovation.


If you want to go deeper on how to collaborate with AI, explore my latest thinking on the AI-LEADERSHIP Synthesis.


Will You Fail Or Forge the AI Future?


We all must navigate The Great AI Transition. Some of us will also choose to lead it. A few are already choosing to expand the adjacent possible through proactive use of imagination, innovation, adaptation, and transformation.


We will transform outdated beliefs and habits and build more adaptive ones. We will invent solutions to unprecedented challenges, using AI as our companions. We will build new human capabilities with our plastic brains and fluid intelligences.


Nothing can stop life from creating more life except ourselves. This is the core truth of humankind that keeps me hopeful, inspired, and engaged even amidst the complexity, chaos, and constant crises of this Weird New World.


We have to get out of the way of our own innate aliveness—and all the creativity and adaptability that comes with it—if we want to shape a future worthy of our highest aspirations.


We are organisms, not algorithms—and that's what makes us different from the machines: infinitely adaptable, endlessly hopeful, and tirelessly creative.

The Great AI Transition is already underway. Pain and problems during this period are inevitable. Yet suffering or thriving in the transition are choices we all must make.


We can either let the future happen to us or play a role in shaping it. This has always been so.


What is your choice, today?


