
How To Use AI To Accelerate Impact Without Losing Credibility As A Professional Executive: The Future of Work Is Here!

  • Writer: Nick Jankel
  • 7 min read

AI Failures in Law and Science Reveal a Fragile Future of [Professional] Work


As an AI+ Leadership keynote speaker, I have long been tracking the use of AI by professionals and executives. Across the United States, the justice system is becoming an unexpected testing ground for the benefits and many risks of generative AI.


In California, a prosecutor’s office filed a criminal motion with inaccurate citations generated by AI. The references looked real but were entirely fabricated. When questioned, the office admitted an attorney had used AI to draft the document.


In Alabama, two lawyers defending the state’s prison system were sanctioned by a federal judge for submitting filings containing fake cases invented by ChatGPT. And in Washington, D.C., lawyers are citing “cases” that do not exist and “precedents” that collapse on inspection, all pulled from AI systems that make things up with extraordinary confidence.



These are not isolated errors. A recent article in Nature reports the same problem in science: many peer reviews of submitted scientific papers (about AI, with delicious irony) appeared to miss "the point of the papers" they were meant to review. In one case, a wholly AI-generated review suggested that a research paper was "on the borderline between accept and reject."


This means AI hallucinations were impacting our shared human knowledge base... about AI!


Why Leaders Cannot Outsource Thinking to Generative AI


These issues go beyond AI model collapse to herald the potential for societal model collapse. AI model collapse is a degenerative process in which generative AI models become less accurate and diverse over time, as they are trained on their own AI-generated data. Imagine what happens to society when the knowledge that professionals and executives base their decisions on is itself undermined by AI-generated errors.
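
To show the shape of that degenerative loop, here is a deliberately toy sketch in Python (NumPy is the only dependency). The "model" is nothing more than a Gaussian distribution, so this is not how real LLM training works, but each generation being fitted only to the previous generation's synthetic output is the essence of model collapse: diversity steadily drains away.

```python
# Toy illustration of AI model collapse (not real training dynamics):
# the "model" is just a Gaussian fitted to data. Each generation is fitted
# only to samples drawn from the previous generation's model, and the
# diversity of what it can produce (its standard deviation) decays.

import numpy as np

rng = np.random.default_rng(seed=0)

mu, sigma = 0.0, 1.0          # generation 0: fitted to "real" data
samples_per_generation = 25   # small synthetic corpora make the decay faster

for generation in range(1, 301):
    synthetic_data = rng.normal(mu, sigma, size=samples_per_generation)
    mu, sigma = synthetic_data.mean(), synthetic_data.std()
    if generation % 50 == 0:
        print(f"generation {generation:3d}: std of output = {sigma:.4f}")

# Typical output: the standard deviation drifts toward zero, i.e. later
# generations can only reproduce a narrow sliver of what the original data
# contained. Real generative models are vastly more complex, but the
# self-referential feedback loop has the same shape.
```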


These examples, as well as the thousands that go unreported, signal that AI is unreliable, and even dangerous, without oversight from humans with non-AI-generated domain knowledge.


As these examples make clear, we're seeing an emerging crisis of professional responsibility, ethics, and legal/fiduciary liability. Perhaps this is the main reason The Economist recently published survey data showing flatlining, and even declining, business adoption of AI.


This is a profound warning for every leader embracing AI. As I often say in my AI and leadership keynotes:

Leaders should not use generative AI without pre-existing domain knowledge, fully activated critical thinking, and detailed fact-checking of every citation, source, and example.

The Future of Work Demands Leaders Who Can Think Beyond the Machine to Accelerate Impact, Not Degenerate Professional Credibility & Duty of Care


Anyone who uses AI as a professional without domain expertise, critical thinking, and endless rigor is almost certainly going to have errors, often serious, in what they produce. At the least-worst end, you create slop, which research shows reduces trust, damages personal and company brand authority, and forces teams into rework.


At the worst end, as these legal cases show, you create legally liable errors, ethical violations, and costly mistakes that have a real impact on lives and livelihoods. Generative AI models do not understand the law, medicine, engineering, finance, marketing, or psychology. They simply predict text that looks plausible. That is not the same as human intelligence. It is linguistic pattern-filling.


Unless you know the domain deeply and take the time to question and check everything (which can be more time-consuming than writing by hand), you cannot separate accuracy from hallucination.


This is why I argue so strongly in my book Now Lead The Change: Repurpose Your Career, Future-Proof Your Organization, and Regenerate Our Crisis-Hit World by Mastering Transformational Leadership, that AI-ready leadership requires a fusion of human discernment, domain expertise, and critical thinking with the rapidly improving capabilities of Artificial Intelligence.


This is the heart of my concept of Leadership–AI Synthesis.


Real-World Experiments: When Generative AI Accelerates and When It Derails Professional Work


Recently, I used AI to help draft a very important professional document that impacts many lives, for which I have some but not total domain expertise. The model helped me draft a structure, argument, and a list of citations based on my detailed briefing.


Then came the real work:

  • I gave the draft to another professional, with domain expertise in the area I was lacking, who did their own due diligence.

  • I checked every single citation myself.

  • I verified whether each paper actually existed, whether titles and authors were correct, and whether the AI’s interpretation matched the actual argument of the paper (a code sketch of this kind of existence check follows this list).

  • This meant quickly reading around 20 scientific journal papers.

  • I rewrote the document based on this work.
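
Part of that citation checking can be mechanized, as flagged in the list above. Here is a minimal sketch using Python, the requests library, and the public Crossref API; the candidate titles are invented placeholders, and a Crossref match only tells you that a similar paper exists, not that the AI interpreted it correctly, so the reading still has to happen.

```python
# Minimal sketch: check whether AI-suggested citations resolve to real records
# in Crossref (https://api.crossref.org). This only confirms that a similarly
# titled paper exists; you still have to read it to confirm the AI's
# interpretation matches the paper's actual argument.

import requests

def crossref_lookup(title: str, rows: int = 3) -> list[dict]:
    """Return the top Crossref matches for a cited title."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": title, "rows": rows},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["message"]["items"]

# Placeholder citations of the kind an LLM might propose.
candidate_citations = [
    "The role of storytelling in organisational change",
    "A completely fabricated paper title that no one ever wrote",
]

for cited_title in candidate_citations:
    matches = crossref_lookup(cited_title)
    print(f"\nCited: {cited_title}")
    if not matches:
        print("  No Crossref matches -- treat as a likely hallucination.")
        continue
    for item in matches:
        real_title = (item.get("title") or ["<untitled>"])[0]
        authors = ", ".join(
            a.get("family", "?") for a in item.get("author", [])
        ) or "unknown authors"
        print(f"  Candidate match: {real_title} ({authors}, DOI {item.get('DOI')})")
    # A weak or missing match means the citation needs manual investigation.
```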


It was still faster than writing everything manually, and the outcome was richer, because the AI produced suggestions that I could then rigorously interrogate and ideas that I may not have come to without it. But the critical point is this: AI did not reduce my responsibility to think. It increased it.


Recently, I ran another revealing experiment: I asked an LLM to describe the controversies and ethical issues associated with a particular technology company. The model responded with an anodyne, generic, and frankly disingenuous paragraph that glossed over some major societal risks.


Because I already had some domain knowledge of the company’s behaviour and the controversies surrounding it, I challenged the model. It took four or five increasingly precise challenges before the LLM finally surfaced the real details of the issues I was referring to. Without pre-existing, non-AI-generated knowledge, gleaned in this case from the highest standards of journalistic reporting in the "mainstream media," I would have walked away from the interaction believing something fundamentally wrong.


The company in question "happens" to be funded by the same major backer that invested in the LLM I used.


This illuminated something deeply concerning: even when the underlying facts are publicly available, generative AI will often default to sanitised, risk-averse answers unless pushed with expert knowledge. Who knows whether, given the lack of corporate transparency and the use of black-box models, it was influenced by the ownership and leadership structures of the two companies...


This is both a validity problem and an ethical one: if you don’t already know the truth, you will never know what the model is hiding, softening, or omitting, or what the real-world consequences of those gaps might be.


Oppositional AIs and Multi-Model Workflows: A Future-of-Work Skillset for High-Stakes Professions


Generative AI remains riddled with inaccuracies, confidently delivered as facts. These hallucinations may be structurally unavoidable: an irreducible corollary of how large language models generate text.


No matter how advanced the system, text-predicting engines will always generate plausible falsehoods unless paired with real-world knowledge, verification systems, and expert oversight. There may never be a version of AI that “just gives the truth” by default.


So the human must remain the arbiter of accuracy, ethics, and insight, especially in high-stakes sectors like law, healthcare, finance, engineering, education, and leadership of all kinds.


To help me maintain professional validity and credibility while using AI to accelerate and amplify my work, I have begun to use my own multi-AI setup, pitting different models against each other with different instructions and using each foundation model's strengths to offset the weaknesses of the others.


This idea is based on how the best thinkers and leaders work, on how scientists review each other's data and arguments, and on Generative Adversarial Network (GAN) setups in machine learning.


Writing my new book on speaking and storytelling for leaders—working title: Speak Electric | Lead Magnetic: The Art & Craft of Transformational Speaking To Lead Change From Any Stage—I used an already finely-tuned LLM to help synthesize a book built from:

  • Verbatim recordings of each book chapter based on my domain expertise

  • My previous writings, both published and unpublished

  • 200+ journal papers and articles I’ve collected over the years on storytelling and keynote speaking


The process took me four weeks to reach a first draft; it usually takes me a year or so. For the second draft, I double-checked quotes, sources, and claims. Then I put that full draft into a second LLM and instructed it to act like a New Yorker-level fact checker. The adversarial LLM flagged 20+ dubious “facts” that I now need to investigate via old-fashioned deep-dive research.


This is why I advocate for multi-model LLM workflows led and guided by a human expert.

  • Model A transcribes or extracts.

  • Model B creates or synthesizes.

  • Model C critiques or challenges.


This triangulation, using each LLM in a specific way, is one way humans and machines can build trustworthy synthetic intelligence.
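
Here is a minimal sketch of the synthesize-and-critique stages of that triangulation, assuming the OpenAI Python SDK purely as a stand-in for whichever providers you use; the model names, prompts, and the chapter_notes.txt file are illustrative placeholders, and ideally the critic model would come from a different vendor than the creator.

```python
# Sketch of a human-led, multi-model workflow: one model synthesizes a draft
# from the expert's own material, a second model attacks it as a sceptical
# fact checker, and the human expert adjudicates the flags. Model names and
# prompts are placeholders; any OpenAI-compatible client (or a second
# provider entirely) can stand in for either role.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask(model: str, system: str, user: str) -> str:
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": user},
        ],
    )
    return response.choices[0].message.content

expert_material = open("chapter_notes.txt").read()  # transcripts, prior writing, etc.

# Model B: create/synthesize, constrained to the expert's own source material.
draft = ask(
    model="gpt-4o",
    system="Synthesize a chapter draft using ONLY the material provided. "
           "Do not add facts, studies, or quotes that are not in the material.",
    user=expert_material,
)

# Model C: critique/challenge, playing the adversarial fact checker.
critique = ask(
    model="gpt-4o-mini",  # ideally a different vendor's model entirely
    system="Act as a rigorous fact checker. List every claim, quote, statistic, "
           "or citation in the draft that could be wrong, unverifiable, or "
           "overstated, and explain why.",
    user=draft,
)

print(critique)
# The expert then investigates each flagged item the old-fashioned way and
# rewrites the draft; neither model gets the final word.
```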


The Protocol for AI-Ready Professionals in the Future of Work


AI is not prompt-and-forget. It is prompt-and-think.

  • Every claim must be checked.

  • Every citation must be cross-verified.

  • Every response must be interrogated.

  • Every assumption must be challenged.


We must never assume AI is right, especially when the stakes are legal, medical, educational, financial, or reputational. This requires both rigorous critical thinking and advanced conceptual thinking on the part of leaders and professionals.


This is also why leaders must carefully choose when to use AI. If you have domain expertise and time for verification, AI can be an accelerant. If you lack domain expertise or cannot verify rigorously, AI can slow you down or put you in danger.


AI accelerates thinking when deployed with skill. It accelerates slop when deployed without it. This is the protocol I speak about and guide my clients to adopt; a minimal code sketch follows the steps below.


  1. Expert questions.

  2. AI proposes/generates.

  3. Expert shapes.

  4. AI suggests new connections.

  5. Expert verifies.

  6. Different AI checks.

  7. Expert polishes.
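
Read as a workflow, those seven steps amount to a loop in which every AI move is bracketed by a human checkpoint. A minimal sketch, again assuming the OpenAI Python SDK with placeholder model names, and with console input() standing in for real expert review:

```python
# Sketch of the prompt-and-think protocol: AI generates, but a human expert
# gates every transition. input() is a crude stand-in for real review tools;
# the model names are placeholders.

from openai import OpenAI

client = OpenAI()

def ai(model: str, instruction: str, text: str) -> str:
    """One AI step: apply an instruction to the current working text."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "system", "content": instruction},
                  {"role": "user", "content": text}],
    )
    return response.choices[0].message.content

def expert(step: str, text: str) -> str:
    """One human checkpoint: the expert reads, edits, and approves."""
    print(f"\n=== {step} ===\n{text}\n")
    edited = input("Paste an edited version, or press Enter to approve as-is: ")
    return edited or text

question = expert("1. Expert questions", "Draft the question or brief here.")
proposal = ai("gpt-4o", "Propose a structured first draft.", question)                 # 2. AI proposes
shaped   = expert("3. Expert shapes", proposal)
enriched = ai("gpt-4o", "Suggest non-obvious connections and counterpoints.", shaped)  # 4. AI suggests
verified = expert("5. Expert verifies every claim and citation", enriched)
audit    = ai("gpt-4o-mini", "Act as an adversarial fact checker: list dubious claims.", verified)  # 6. Different AI checks
final    = expert("7. Expert polishes, informed by the audit",
                  verified + "\n\nFACT-CHECK AUDIT:\n" + audit)
```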



The Future of Work Belongs to Professionals Who Can Integrate Human and Artificial Intelligence: Six Rules for AI-Ready Leaders


  1. Never publish, file, or present AI-generated content without expert verification.

  2. If you lack domain expertise, upskill yourself using traditional, trustworthy sources or hire a domain expert to do so.

  3. Use a second or third LLM as an adversarial guard model and fact checker.

  4. Continuously refine and evolve your instructions for each model so you can better guard against its specific foibles and flaws.

  5. Once content has been improved and approved by a human, be very careful about going backwards by putting it back in an LLM; otherwise, the hard work of critical thinking and fact-checking needs to be redone from scratch every single time.

  6. Be uber-careful with version control. Always know which is the last human-approved version and ensure everyone else does too. Do not take content from an AI thread or output unless you want to do the rigorous rewrite all over again.


This is one way leaders can integrate human intelligence with artificial intelligence in the Future of Work realized today: responsibly, strategically, smartly, and with integrity. If your organization wants to build AI-ready leadership capabilities, check out my work here; or if you want a powerful AI keynote speaker or a leadership keynote speaker, get in touch with my team.
