Why AI Risk Is a Communications Problem Now – with Alec Crawford


You don’t roll out AI. It’s not an ERP system. And yet most organizations are still treating it like a procurement decision, reaching for governance frameworks and compliance checklists as if the real challenge is containment rather than comprehension.

In this episode of The Trending Communicator, host Dan Nestle sits down with Alec Crawford, founder and CEO of Artificial Intelligence Risk, Inc. and host of the AI Risk Reward podcast. Alec built neural networks at Harvard in 1987, spent 30 years on Wall Street as a risk officer managing hundreds of billions of dollars, and now builds AI governance, risk, compliance, and cybersecurity platforms.

Alec and Dan dig into what risk actually looks like when every employee has access to intelligence, why the crisis playbook most companies rely on is already obsolete, and how deepfake threats and shadow AI are reshaping the landscape for communicators and executives alike.

Listen in and hear about…

  • Why domain expertise becomes more valuable, not less, in an AI-enabled organization
  • The context window problem and what it means for accuracy and trust
  • Shadow AI, jailbreak attacks, and the real cybersecurity threats facing companies
  • How deepfakes are already being weaponized against executives and brands
  • Why communicators need to become AI risk experts, not just AI users

Notable Quotes from Alec Crawford

“Once you understand that, aha, light bulb moment, you can’t ask ChatGPT to check its work. Typically, you might go to another model and say, hey, here’s what ChatGPT said. What do you think? But you can’t ask a model to check its own work.”

“The way to fix shadow AI is to give people at your company great AI that’s better than they could get at home. The best models, connected to your corporate data, connected to your email, connected to anything you could dream you want to be connected to. Why would I now go use some other AI if I’ve got access to that?”

“At this point they probably need somewhere between five and 10 seconds of a video like this to create a deepfake where they can say whatever they want and make you say whatever they want.”

Resources and Links

Dan Nestle

Alec Crawford

Timestamps

0:00:00 Opening & Introduction to Alec Crawford, AI Transformation Mistakes
0:07:13 AI Hallucinations, Research Pitfalls, and Due Diligence
0:12:35 Importance of Prompting, Domain Expertise & AI Iteration
0:18:32 Alec’s Journey: From Building Neural Nets to Institutional AI
0:24:18 Limits of Large Language Models, Explainability, and AI Sentience
0:29:04 Context Windows, Memory Limits, and AI Conversation Pitfalls
0:35:42 AI Safety, Prompt Injection, and Corporate Guardrails
0:41:09 Beating Shadow AI: Corporate AI Environments & User Adoption
0:45:55 AI Agents, Agentic Workflow, and Financial Services Applications
0:53:26 Crisis Communication in the AI Era: Risks & Recommendations
1:01:30 Future of AI Models, Deepfakes, and Validation Technology
1:05:39 Alec’s Book Announcement & Closing Remarks

(Notes co-created by Human Dan, Claude, and Castmagic)
