In-Depth Guides

The Playbooks

Eight guides, each built from peer-reviewed studies & practitioner frameworks. HBR, McKinsey, Stanford HAI, MIT Sloan, Harvard, & the people doing the actual work. We read the papers. We pulled the citations. The frameworks here are ones you can use.

Core Framework

Judgment Is the New Alpha: Why Speed Without Direction Is Expensive Chaos

Everyone's measuring speed. The research says they should be measuring something else entirely.

The Speed Trap

Every organization is chasing speed. AI promises to make everything faster — content creation, data analysis, decision-making. Here's the uncomfortable truth the research keeps confirming: 40% of AI time savings are lost to rework when human judgment isn't applied. Speed without direction is just expensive chaos.

That's the central insight of the "Judgment as the New Alpha" framework: in an era where AI can do almost anything fast, the differentiator is no longer speed — it's the quality of human judgment applied to AI outputs. The facilitators, trainers, & L&D professionals who thrive will be the ones who develop & teach this judgment.

"We're generating more but thinking less."

— Senior Executive, quoted in Harvard Business Review

The Technology Grid vs. The Human Grid

Think of it as two complementary systems. The Technology Grid handles computation, pattern recognition, content generation, & data processing. The Human Grid handles judgment, context, ethics, empathy, & the messy, ambiguous decisions that define real organizational life. The interesting part happens at the intersection — & that intersection is exactly where facilitators operate.

McKinsey's research confirms this: 70% of AI value comes from people & process, 20% from technology, & only 10% from algorithms. Board of Innovation puts it even more bluntly: for every dollar invested in technology, invest two in people & change management.

From Super-Doer to Super-Curator

The role of the facilitator is shifting. The old model was the "Super-Doer" — the person who created all the content, managed all the logistics, & delivered all the insights. The new model is the "Super-Curator" — someone who orchestrates the blend of human & AI capabilities, curates the best outputs from both, & applies judgment to determine what actually serves the learner.

This isn't about doing less. It's about doing different. The Super-Curator needs to understand AI capabilities deeply enough to know when to use them & when to trust the room. They need to develop what the research calls "governed autonomy" — moving fast while maintaining safety rails.

Speed vs. Direction: What We Should Actually Measure

As Seth Godin puts it: "The number on the car's speedometer isn't always an indication of how fast you're getting to where you're going. You might, after all, be driving in circles, really quickly." Most organizations are measuring the wrong things. They track tasks per hour, reports generated, emails processed, meetings attended. What they should be measuring: decisions per day, impact per quarter, problems prevented, & judgment applied.

The Four Levels of AI Adoption

Level 1 — Awareness: Knowing AI exists. Experimenting individually. Most organizations are here.

Level 2 — Solo Productivity: Copilots for tasks. Personal efficiency gains. This is where the hype lives.

Level 3 — Team Transformation: Workflows redesigned. Roles evolving. Cross-functional collaboration with AI.

Level 4 — Systemic Change: Business model transformation. New value creation. This is where the real ROI lives — and almost nobody is here yet.

Fast + Safe: Governed Autonomy in Practice

The "Winner's Circle" is what the research calls Governed Autonomy — moving fast while maintaining safety rails. DBS Bank exemplifies this with 800+ production AI models across 350 use cases. The bank built that capability through monthly "north star & feedback" sessions that flag cultural, skill, & tooling frictions; cross-functional "mini-squads" assigned to attack the biggest challenge each cycle; & bank-wide playbooks that codify what works.

AI-Augmented Apprenticeships

Here's the paradox: AI is automating the entry-level tasks that traditionally built judgment. The tasks that taught junior employees how to think — data analysis, report writing, research synthesis — are now done by machines. The opportunity is to redesign the leadership path: accelerate exposure to complex decisions, create AI-augmented apprenticeships, build judgment through guided practice, & measure growth in decision quality.

"If AI does to white-collar work what globalization did to blue-collar, we need to confront that directly. Not with abstractions about 'the jobs of tomorrow,' but with a credible plan."

— Larry Fink, CEO, BlackRock — Davos 2026

What This Means for Your Practice

Measure decision quality, not just productivity. Track whether AI-augmented decisions lead to better outcomes, not just faster ones.

Build judgment-development into every workshop. Create moments where participants practice evaluating AI outputs, not just generating them.

Teach the "rework trap." Help participants understand that uncritical AI use often creates more work, not less.

Model the Super-Curator role. Show participants what it looks like to orchestrate human-AI collaboration, not just delegate to AI.

Pause & Reflect

Think about the last three decisions your team made using AI-generated data. In how many of those did someone stop to question the output before acting on it?

Where in your organization is speed being rewarded when judgment should be?

Experiment to Try

The Judgment Audit

Take one AI-generated deliverable from the past week. Have three people independently evaluate it for accuracy, bias, & missing context. Compare notes. The gaps between what each person caught — & what nobody caught — are your curriculum.

Workshop Design

Designing AI-Augmented Workshops: From ADDIE to the AI Stack

Hardman's AI Stack, Mollick's co-editing experiments, NNGroup's practical tips, and Board of Innovation's zero-based redesign. The frameworks that change how you build sessions.

The ADDIE Model, Rewired

Dr. Philippa Hardman's research shows that AI is "quietly rewriting the ADDIE model — in a good way." Rather than replacing the Analysis-Design-Development-Implementation-Evaluation framework, AI becomes a partner in each phase. The key insight from her work: different AI models are better at different tasks.

The AI Stack for Instructional Design

Hardman's "AI Stack" framework proposes matching specific AI models to specific instructional design tasks:

Research Phase

Use tools like Perplexity, Consensus, & Elicit for evidence-based research. Purpose-built for finding & synthesizing academic literature.

Creative Phase

Use Claude or Gemini for brainstorming, ideation, & creative content generation. These models excel at divergent thinking.

Critical Thinking Phase

Use reasoning models (o-series, Claude Opus) for analysis, evaluation, & quality assurance. These excel at convergent thinking.

Execution Phase

Use fast, efficient models for formatting, templating, and production tasks. Speed matters here more than depth.
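One lightweight way to make a model-per-phase stack concrete is a routing table keyed by phase. A minimal Python sketch: the four phase names follow Hardman's stack, but the model identifiers and the `route` helper are illustrative assumptions, not part of her framework.

```python
# Hypothetical routing table mapping instructional-design phases to
# the kind of AI model each phase calls for (model names are examples).
AI_STACK = {
    "research":  {"models": ["perplexity", "consensus", "elicit"], "goal": "evidence synthesis"},
    "creative":  {"models": ["claude", "gemini"],                  "goal": "divergent ideation"},
    "critical":  {"models": ["o-series", "claude-opus"],           "goal": "convergent evaluation"},
    "execution": {"models": ["fast-small-model"],                  "goal": "formatting & production"},
}

def route(task_phase: str) -> dict:
    """Return the recommended model slot for a given design phase."""
    try:
        return AI_STACK[task_phase]
    except KeyError:
        raise ValueError(
            f"Unknown phase: {task_phase!r}. Expected one of {sorted(AI_STACK)}"
        )

print(route("creative")["goal"])  # divergent ideation
```

The value of writing it down this explicitly, even on a slide rather than in code, is that "switch models as the job changes" becomes a checkable team convention instead of individual habit.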

"Rather than trying to force one model to do everything, AI works best as a copilot for Instructional Design when you switch models as the job changes."

— Dr. Philippa Hardman, DOMS Newsletter

The FRAME Workflow

Hardman also proposes the FRAME workflow — a 5-step process that turns "ad-hoc AI adoption & unstructured prompting into a repeatable, evidence-based process that mirrors how expert instructional designers actually work." The outputs are "reliable, explainable, auditable & backed by research — not guesswork."

Ethan Mollick's Co-Editing Approach

Wharton professor Ethan Mollick, who required AI use in his MBA classes, found that the most effective approach is co-editing — students & AI collaborating iteratively to improve work. This isn't about AI generating & humans accepting. It's about a genuine back-and-forth where human judgment shapes & refines AI output at every step.

Nielsen Norman Group's Practical Tips

Prepare an AI Warm-up Activity. Start with a simple, fun activity that builds familiarity & troubleshoots technical issues early.

Upload Files for Quick Context. Provide AI with company info, user personas, & project details to get relevant outputs.

Create Custom GPTs. For recurring workshops, pre-load a custom AI with all necessary context, constraints, & guidelines.

Plan More Time for Reading. AI generates ideas quickly, but participants need time to read, analyze, & iterate on outputs.

The Creative AI Strategy Matrix

From Board of Innovation's "Age of Creative AI" research: the fundamental shift is from creators to editors. 48% of code on GitHub is now written by AI copilots. The Creative AI Strategy Matrix maps your organization's position across two axes: AI capability maturity & creative integration depth. Most organizations are stuck in the bottom-left quadrant — using AI for basic automation while missing the creative amplification opportunity.

Zero-Based AI Workflow Redesign

Board of Innovation's AI-First Playbook introduces a concept worth sitting with: don't just add AI to existing workflows — redesign from zero. Start with the desired outcome & work backward, asking "If we were building this from scratch today, with AI as a native capability, what would it look like?" That's fundamentally different from the incremental approach most organizations take — & it's where the 10x improvements come from.

The W.I.S.E. A.T. A.I. Framework

From Angelo Biasi's ATD research, the W.I.S.E. A.T. A.I. framework provides a practical structure for integrating AI into any workflow: Work identification, Intelligent tool selection, Streamlined processes, Evaluation loops, Adaptive Training, & AI-augmented Implementation. Designed to help L&D teams move from ad-hoc AI experimentation to systematic integration.

The Human-AI-Human Framework

One of the most practical frameworks from the ATD research: every AI-augmented workflow should follow a Human → AI → Human pattern. A human defines the problem & sets constraints. AI generates, analyzes, or processes. A human evaluates, refines, & makes the final decision. Simple pattern. Prevents both over-reliance on AI & under-utilization of its capabilities.
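The Human → AI → Human pattern can be sketched as a three-step pipeline. Everything below is illustrative (the function names, the `Brief` type, and the stand-in `ai_generate` are assumptions, not part of the ATD research); the structural point is that the AI step is always bracketed by a human framing step and a human sign-off step.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Brief:
    problem: str
    constraints: list

def human_frame(problem: str, constraints: list) -> Brief:
    """Step 1 (human): define the problem and set constraints."""
    return Brief(problem=problem, constraints=constraints)

def ai_generate(brief: Brief) -> str:
    """Step 2 (AI): stand-in for a model call that drafts against the brief."""
    return f"DRAFT for '{brief.problem}' honoring {len(brief.constraints)} constraints"

def human_review(draft: str, approve: Callable[[str], bool]) -> str:
    """Step 3 (human): evaluate, refine, and make the final call."""
    if not approve(draft):
        raise ValueError("Draft rejected: revise the brief and regenerate")
    return draft

brief = human_frame("onboarding module outline", ["30 min", "no jargon"])
final = human_review(ai_generate(brief), approve=lambda d: d.startswith("DRAFT"))
```

Note that the approval function is a required argument, not a default: the pattern fails exactly when the final human step degrades into rubber-stamping.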

Pause & Reflect

Pick your most-repeated workshop. If you rebuilt it from zero today — AI-native from the start — what would you cut, what would you add, & what would change entirely?

When was the last time a participant in your workshop produced something better than what you could have created alone? What conditions made that possible?

Experiment to Try

The AI Stack A/B Test

Take one module from an existing workshop. Run it twice — once the traditional way, once using Hardman's AI Stack (different AI models for research, creative, critical thinking, & execution phases). Document the difference in output quality & time spent. The comparison is your business case.

Cognitive Science

The Cognitive Paradox: When AI Helps Learning and When It Hurts

Students using ChatGPT scored 11 percentage points lower on a retention test. The mechanism is called cognitive offloading, and it has implications for every training program you run.

The Uncomfortable Evidence

A 2025 randomized controlled trial by Barcaui found that students who used ChatGPT as a study aid scored 11 percentage points lower on a knowledge retention test compared to those who used traditional study methods (57.5% vs. 68.5%). A separate study of 666 participants found a significant negative correlation between frequent AI tool usage & critical thinking abilities.

This is what researchers are calling the "cognitive paradox of AI in education" — AI can enhance learning in some contexts while simultaneously creating cognitive dependency & eroding critical thinking skills in others.

"Once you start to know what your mind can do that's so much better than AI, it kind of makes sense that some tasks are well-relegated to AI and other tasks are not."

— Tina Grotzer, Harvard Graduate School of Education

Cognitive Offloading: The Hidden Risk

The mechanism is called cognitive offloading — when learners rely on external AI tools, it reduces active recall & problem-solving, which are crucial for cognitive development. The generation effect in learning science tells us that information we actively generate (even with effort & errors) is retained far better than information we passively receive.

Harvard's Ying Xu frames it as two dimensions: "One is what they learn, like the facts and the information. And the other one is their ability to learn. And those are really the foundational capacities that allow students to acquire new knowledge and skills in the future." AI can help with the first while quietly undermining the second.

Desirable Difficulties and Bloom's 2-Sigma Problem

Learning science has long established that "desirable difficulties" — the struggle involved in learning — are essential for building expertise. AI's ability to smooth away those difficulties is a double-edged sword. As one senior banker told HBR: "If my young analysts never struggle, and put in those long hours that I did, will they ever learn to think?"

The flip side: Bloom's famous "2-sigma problem" showed that one-on-one tutoring produces two standard deviations of improvement over classroom instruction. AI tutoring systems show promise — a Brookings Institution review found that "generative AI-powered tutoring platforms can hold numerous benefits for students — if designed responsibly." That qualifier is doing a lot of heavy lifting.

What This Means for Facilitators

Design for active generation, not passive consumption. Use AI to create prompts & challenges, not to provide answers.

Build in "cognitive forcing functions." Harvard's research on these interventions shows they can disrupt heuristic reasoning & encourage analytical thinking even when AI is present.

Use the "traffic light" framework. Harvard Kennedy School's approach: Green (unrestricted AI), Yellow (limited AI), Red (no AI) — applied deliberately to different learning activities.

Teach metacognition alongside AI skills. Help learners understand when AI is helping them learn & when it's helping them avoid learning.

Pause & Reflect

When you use AI to draft something, do you edit it or accept it? Be honest. Now ask: what did your brain skip by not writing the first draft?

If struggle is essential for learning, what are you doing to preserve productive struggle in your AI-augmented programs?

Experiment to Try

The Delayed AI Test

Design a 30-minute learning activity two ways: Version A gives participants full AI access from the start. Version B gives AI access only after they've spent 15 minutes working without it. Test retention 48 hours later. The difference will change how you design everything.

Human Factors

Psychological Safety in the Age of AI: The Edmondson-SCARF Framework

AI generates confident-sounding nonsense at scale. Edmondson's 2026 HBR research names the problem — and the facilitation approach that addresses it.

AI Creates Trust Ambiguity

Amy Edmondson's latest research with Jayshree Seth, published in Harvard Business Review in February 2026, identifies a new challenge: AI creates "trust ambiguity" by providing confident but incorrect information. This undermines the psychological safety teams need to function effectively. When an AI system confidently presents wrong information, team members who question it risk looking like they're being difficult — the exact dynamic psychological safety is supposed to prevent.

"The future of work isn't about choosing between human intelligence and artificial intelligence — it's about building teams that allow both to contribute to their fullest potential."

— Jayshree Seth & Amy C. Edmondson, Harvard Business Review, 2026

The SCARF Model Applied to AI Adoption

David Rock's SCARF model — Status, Certainty, Autonomy, Relatedness, Fairness — provides a neuroscience-based framework for understanding why AI adoption triggers such strong emotional responses:

Status (Threat): "Will AI make my expertise irrelevant?" Employees fear losing their professional identity & standing. (Reward): Position AI mastery as a new form of expertise that elevates status.
Certainty (Threat): "What will my job look like in 6 months?" The pace of AI change creates chronic uncertainty. (Reward): Provide clear roadmaps and regular updates on how AI will be integrated.
Autonomy (Threat): "Am I being forced to use tools I don't trust?" Mandated AI adoption removes choice. (Reward): Give people agency in how and when they adopt AI tools.
Relatedness (Threat): "Is AI replacing the human connections in my work?" AI can reduce opportunities for the interactions that build trust. (Reward): Design AI integration that enhances, not replaces, human collaboration.
Fairness (Threat): "Is AI being used to monitor & evaluate me?" Algorithmic management creates surveillance anxiety. (Reward): Be transparent about how AI data is used & ensure equitable access.

The De-Stressing Imperative

Research from the American Psychological Association shows that AI anxiety is real & measurable. Fred Oswald, PhD, notes: "Advances in AI are happening rapidly in the workplace, and many of their effects are uncertain. Will AI empower employees and organizations to be more effective? Or consistent with employee worries, will AI replace their jobs? We're likely to see both."

The facilitator's role is to create space where people can process these fears honestly. The Acknowledge-Align-Act framework provides a practical structure: Acknowledge the legitimate concerns, Align on shared values & goals, then Act on specific, manageable next steps.

Google's Project Aristotle Implications

Google's landmark Project Aristotle research identified psychological safety as the single most important dynamic of effective teams. In the AI era, this finding becomes even more critical. Teams need to feel safe to question AI outputs, admit they don't understand how a tool works, & voice concerns about AI's role in their work — without fear of being seen as "resistant to change."

Pause & Reflect

If you announced tomorrow that AI would be integrated into every team workflow, who in your organization would feel threatened — & have you talked to them?

What's the difference between 'AI-safe' (the technology works) and 'AI-psychologically-safe' (people feel okay using it imperfectly)?

Experiment to Try

The AI Confession Round

In your next team meeting, try this: each person shares one time they used AI & it went wrong, or one thing they're afraid AI will change about their job. No fixing, no reassuring. Just listening. Notice what happens to the room. That shift in energy? That's psychological safety being built.

Personalization

Deep Individualized Personalization: The Promise and the Evidence

Bloom's 2-sigma problem is 40 years old. AI claims to solve it. The evidence is more nuanced than the vendors suggest.

The Promise

The holy grail of corporate training has always been personalization at scale. Benjamin Bloom's famous 1984 study showed that one-on-one tutoring produces two standard deviations of improvement over classroom instruction — the "2-sigma problem." AI promises to close that gap by delivering individualized learning paths, adaptive assessments, & real-time feedback to every learner simultaneously.

What the Evidence Actually Says

A comprehensive 2025 review by Tan et al. in the journal Computers & Education found that AI-powered adaptive learning platforms can produce "small-to-moderate achievement gains" — but the credibility of these gains is "often contested, with uneven benefits across learners & settings." The gains are most pronounced when recommendations respond to learners' current mastery rather than offering uniform digital practice.

A separate study by Jamali et al. (2025) in Frontiers in Computer Science found that AI-driven adaptive features on many platforms are "often subtle and minimally impactful from the user's perspective." The technology is improving rapidly, but we're not yet at the point where AI personalization consistently outperforms well-designed human-led instruction.

Where Personalization Actually Works

Pre-workshop assessment. AI can analyze learner backgrounds, skill levels, & learning preferences to help facilitators customize content before the session even begins.

Adaptive practice and simulation. Tools like Yoodli and Second Nature provide personalized roleplay scenarios that adapt to each learner's performance in real time.

Post-workshop reinforcement. AI-powered spaced repetition & follow-up can combat the Ebbinghaus forgetting curve (90% of learning forgotten within a week without reinforcement).

Learning path curation. AI can recommend next steps based on individual performance, creating personalized development journeys at scale.
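The Ebbinghaus forgetting curve behind the reinforcement point above is commonly modeled as exponential decay, R = exp(-t/S), where S is a memory "stability" in days; a spaced-repetition nudge then fires whenever predicted retention dips below a threshold. A rough sketch (the stability value and the 0.7 threshold are illustrative assumptions, not figures from the studies cited here):

```python
import math

def retention(days_since_review: float, stability: float) -> float:
    """Ebbinghaus-style forgetting curve: R = exp(-t / S)."""
    return math.exp(-days_since_review / stability)

def next_review_day(stability: float, threshold: float = 0.7) -> float:
    """Days until predicted retention falls to the threshold:
    solve exp(-t/S) = threshold  =>  t = -S * ln(threshold)."""
    return -stability * math.log(threshold)

# With an assumed stability of 3 days, retention after a week is
# roughly 10% -- in the same ballpark as the "90% forgotten within
# a week" figure, and the first nudge should land about a day out.
print(round(retention(7, 3.0), 2))
print(round(next_review_day(3.0), 2))
```

In practice, systems like spaced-repetition schedulers also grow S after each successful review, which is why the intervals stretch from a day to a week to a month.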

The Hilton Case Study

One of the most compelling enterprise examples: Hilton Hotels rolled out an AI-powered virtual reality training program for front desk staff that reduced training time from four hours to 20 minutes while maintaining quality — & scaled to over 400,000 employees. The key: the AI personalized the pace & scenarios to each learner's needs.

Pause & Reflect

Your learners have wildly different starting points. How much of your current program acknowledges that — & how much pretends everyone starts at the same place?

If you could give every learner a personal tutor who knew their exact gaps, what would that tutor do differently from your current program?

Experiment to Try

The Three-Lane Highway

Take one topic from your curriculum. Use an AI tool to generate three versions: beginner, intermediate, & advanced. Give learners a 5-question diagnostic & route them to the right version. Track completion rates & satisfaction vs. your one-size-fits-all version. The data makes the case.

Simulation & Practice

AI Simulation and Practice Tools: Yoodli, Mursion, and the New Rehearsal Space

Yoodli, Mursion, Strivr, Second Nature — the tools that let people fail safely, practice repeatedly, and build skills that lectures never could.

Why Simulation Matters More Than Ever

Nick Shackleton-Jones, one of the most respected voices in learning design, argues that "learning without experience often results in engagement without retention." His Affective Context Model posits that learning is fundamentally about attaching emotional significance to information — & simulation creates the emotional context that lectures cannot.

AI-powered simulation tools are making it possible to practice difficult conversations, high-stakes presentations, & complex interpersonal scenarios at scale, without the cost & scheduling challenges of human role-play partners.

"The future of coaching is here. The perfect augmentation to human coaching."

— Bryan Ackermann, Head of AI Strategy, Korn Ferry

The Tools

Yoodli

AI-powered communication coaching and roleplay. Provides real-time feedback on speaking skills, with enterprise customers like Snowflake, Korn Ferry, & Google reporting significant time & cost savings. Particularly strong for sales conversations, presentations, & interview prep.

Mursion

Immersive learning platform using AI combined with human-in-the-loop simulations. Specializes in soft skills training — difficult conversations, DEI scenarios, & leadership coaching. The human element ensures nuanced, realistic interactions.

Strivr

Enterprise XR (extended reality) platform for immersive training. Used by Walmart, Verizon, & BMW for frontline employee training. Particularly effective for scenarios that benefit from spatial awareness & physical context.

Second Nature

AI-powered sales training platform. Creates realistic AI "buyers" that sales reps can practice with. Adapts scenarios based on performance & provides detailed feedback on technique.

Cogito (Verint)

Real-time AI coaching for contact center agents. Analyzes conversation dynamics & provides live guidance on empathy, engagement, & effectiveness during actual customer interactions.

Pause & Reflect

Which of these simulation tools would address the biggest skill gap on your team right now? Not the most interesting tool — the most needed one.

Nick Shackleton-Jones says learning without experience is engagement without retention. How much of your current training is experience-based vs. content-based?

Experiment to Try

The Before & After

Record yourself giving a 3-minute pitch or presentation. Then practice the same pitch 5 times with Yoodli's AI roleplay. Record yourself again. Compare your first recording to your sixth. The improvement curve is the business case for simulation-based training.

Enterprise Strategy

Reskilling at Scale: What McKinsey, BCG, and Deloitte Actually Recommend

44% of on-the-job skills disrupted by 2030. 85% of leaders expect a surge in development needs. What McKinsey, BCG, and Deloitte recommend doing about it.

The Scale of the Challenge

The World Economic Forum estimates that 44% of on-the-job skills will be disrupted by 2030. LinkedIn states that 70% of all job-related skills go out of date each year. Gartner reports that 85% of business leaders agree there will be a "surge in skills development needs" in the next three years. This isn't a future problem. It's a now problem.

McKinsey's Three Dimensions of AI Upskilling

McKinsey's framework identifies three dimensions organizations need to address simultaneously:

1. AI Literacy: Basic understanding of what AI is, what it can do, & how it works. The foundation — everyone needs it.
2. AI Adoption: Practical skills for using AI tools in daily work. Role-specific & requires hands-on practice.
3. AI Domain Transformation: The ability to reimagine entire workflows & business processes around AI capabilities. Where the real value lives — & where most organizations are weakest.

"Leaders who try to specify precisely how AI should be implemented across their organizations often find themselves building yesterday's solutions for tomorrow's problems."

— McKinsey, Superagency in the Workplace, 2025

BCG's "Future-Built" Companies

BCG's 2025 global study found stark differences between AI leaders and laggards. Future-built companies plan to upskill more than 50% of employees on AI, compared with 20% for laggards. Critically, 88% of managers in future-built companies role-model AI use, versus just 25% at laggards. Leadership modeling is the single strongest predictor of successful AI adoption.

Board of Innovation's Three Waves

Board of Innovation's framework describes AI adoption in three waves: Efficiency (doing the same things faster), Quality (doing the same things better), and Transformation (doing entirely new things). Most organizations are stuck in Wave 1, optimizing for speed without rethinking what they're doing. The real value is in Wave 3 — but it requires the kind of deep judgment & creative thinking that only well-trained humans can provide.

AI Fluency: The Five Domains

Board of Innovation's AI Fluency Playbook defines five domains that every employee needs to develop:

1. AI Awareness: Understanding what AI is, what it can & can't do, & how it's changing your industry.

2. AI Application: Knowing which tools to use for which tasks, & how to evaluate AI outputs critically.

3. AI Ethics: Understanding bias, privacy, transparency, & the responsible use of AI in professional contexts.

4. AI Strategy: Connecting AI capabilities to business outcomes & organizational transformation.

5. AI Leadership: Modeling AI use, building AI-ready teams, & driving cultural change.

The playbook is blunt: "No amount of data scientists can compensate for a leadership team that doesn't understand how AI creates value."

The 3R Framework for AI Strategy

From Board of Innovation's AI Strategy Framework: Reimagine, Realign, Reinvent. Reimagine your value proposition with AI capabilities. Realign your operating model to leverage human-AI collaboration. Reinvent your business model to create new forms of value. The framework includes a 14-dimension AI Implementation Canvas that maps across strategy, operations, people, & technology.

Adoption Models: Rogers, Gartner, and TAM

Understanding how technology adoption actually works is critical for L&D professionals leading AI rollouts. Rogers's Diffusion of Innovations explains why you need to include skeptics in your innovation teams — they stress-test ideas & prevent groupthink. The Gartner Hype Cycle helps you set realistic expectations about where AI tools are in their maturity curve. The Technology Acceptance Model (TAM) reminds us that adoption depends on two factors: perceived usefulness & perceived ease of use. If your people don't see the value or find the tools too hard, no amount of mandates will drive adoption.

Measuring What Matters: Innovation Per Employee

From Angelo Biasi's ATD research: the metric that matters most isn't tasks per hour or courses completed — it's innovation per employee. How many new ideas, process improvements, & creative solutions is each person generating? AI should be amplifying human creativity & problem-solving, not just automating routine tasks. If your AI adoption isn't increasing innovation per employee, you're optimizing the wrong thing.

Josh Bersin's Learning Maturity Model

Bersin's four-level model describes the evolution of corporate learning:

Level 1: Static Training — Compliance-based, top-down, one-size-fits-all.
Level 2: Scaled Learning — Broader portfolio of tools and formats.
Level 3: Integrated Development — Tailored programs around roles and career paths.
Level 4: Dynamic Learning Everywhere — AI-powered, personalized, embedded in workflow.

Pause & Reflect

Of the three waves — Efficiency, Quality, Transformation — which one is your organization stuck in? What would it take to move to the next?

BCG found that 88% of managers in AI-leading companies role-model AI use. Does your leadership team use AI visibly — or just talk about it?

Experiment to Try

The AI Fluency Map

Map your team's AI fluency across Board of Innovation's five domains (Awareness, Application, Ethics, Strategy, Leadership). Score each person 1-5. The pattern will show you exactly where to invest your next training dollar. Bonus: do it again in 90 days & measure the shift.
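If you want the 90-day comparison to be more than a gut feel, the scoring can be tabulated in a few lines. A small sketch: the five domain labels come from the playbook, but the people and scores are invented placeholders.

```python
from statistics import mean

DOMAINS = ["Awareness", "Application", "Ethics", "Strategy", "Leadership"]

# Hypothetical 1-5 fluency scores per team member per domain.
team = {
    "Ana":   {"Awareness": 4, "Application": 3, "Ethics": 2, "Strategy": 2, "Leadership": 1},
    "Ben":   {"Awareness": 3, "Application": 4, "Ethics": 3, "Strategy": 1, "Leadership": 2},
    "Chloe": {"Awareness": 5, "Application": 2, "Ethics": 3, "Strategy": 3, "Leadership": 2},
}

def domain_averages(scores: dict) -> dict:
    """Average fluency per domain; the low averages are the investment map."""
    return {d: round(mean(person[d] for person in scores.values()), 2) for d in DOMAINS}

def weakest_domain(scores: dict) -> str:
    avgs = domain_averages(scores)
    return min(avgs, key=avgs.get)

print(domain_averages(team))
print(weakest_domain(team))  # where the next training dollar goes
```

Rerunning the same tabulation on the 90-day rescore gives you the shift per domain, which is a far more defensible artifact than "the team feels more fluent."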

Ethics & Policy

AI Ethics and Policy for Training Environments

38 U.S. states passed AI measures in 2025. The EU AI Act is live. Your training environment needs a policy, and here's what it should cover.

The Regulatory Landscape Is Moving Fast

The EU AI Act is now the world's first comprehensive AI regulation, categorizing AI systems by risk level. The Colorado AI Act requires risk assessments, bias audits, & consumer disclosures for high-risk AI systems. 38 U.S. states passed AI-related measures in 2025 alone. For L&D professionals, this means AI policies for training environments aren't optional — they're a compliance requirement.

Key Ethical Concerns in Training

Bias in AI-Generated Content: AI models reflect the biases in their training data. Training content generated by AI may perpetuate stereotypes, exclude perspectives, or present culturally narrow viewpoints.
Data Privacy: Learning platforms collect sensitive data about employee performance, knowledge gaps, & learning behaviors. 80% of business leaders see AI explainability, ethics, bias, or trust as a major roadblock to generative AI adoption (IBM).
Algorithmic Management: When AI is used to track, evaluate, & manage learner performance, it can create surveillance dynamics that undermine psychological safety & intrinsic motivation.
Intellectual Property: Who owns content generated by AI during a workshop? What happens when AI tools are trained on proprietary company information?

Frameworks for Responsible AI in L&D

The CIPD and IFOW have developed the 6Rs Framework — a people-centered approach to AI implementation: Reveal, Reflect, Reimagine, Realise, Realign, Review. Deloitte's Trustworthy AI Framework focuses on ensuring AI systems are secure, ethical, & transparent. The OECD AI Principles promote innovative yet trustworthy use of AI that respects democratic norms.

Cutting Through "Agent Washing"

From Markus Bernhardt's ATD research: "agent washing" is the new greenwashing — vendors slapping the "AI agent" label on what are essentially glorified chatbots. Bernhardt's framework for evaluating AI agents asks three critical questions: Does it actually take autonomous action, or just generate text? Does it have access to real tools & data, or is it sandboxed? Can it learn from outcomes & improve, or does it reset every session? Most "AI agents" in the L&D space fail all three tests.

The Inspect & Verify Framework

Bernhardt proposes a practical evaluation framework for L&D teams assessing AI tools:

Inspect the Claims: What does the vendor say the AI can do? Get specific. Ask for demonstrations with your actual use cases, not curated demos.

Verify the Architecture: Is it a true agent (autonomous, tool-using, learning) or a wrapper around an LLM API? Ask about the underlying models, data handling, & integration capabilities.

Test the Boundaries: Where does it fail? Every AI system has failure modes. A vendor who can't tell you where their system breaks down doesn't understand their own product.

Evaluate the ROI: What's the actual time/cost savings versus the subscription cost? Factor in the learning curve, integration effort, & ongoing maintenance.
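The ROI step above is just arithmetic, and writing it down keeps the conversation honest. A back-of-the-envelope version, with every number illustrative and the function name our own:

```python
# Back-of-the-envelope version of the "Evaluate the ROI" step.
# All figures here are placeholders; plug in your own.

def monthly_roi(hours_saved, hourly_rate, subscription,
                integration_cost=0.0, amortize_months=12):
    """Net monthly value of an AI tool: time saved at a loaded hourly
    rate, minus the subscription, minus one-off integration and
    learning-curve costs spread over `amortize_months`."""
    gross = hours_saved * hourly_rate
    return gross - subscription - integration_cost / amortize_months

# A tool that saves 10 hours/month at $60/hour, costs $200/month,
# and took $1,200 of setup effort amortized over a year:
net = monthly_roi(10, 60, 200, integration_cost=1200)
print(net)  # 300.0
```

If `net` comes out negative on actual (not projected) hours saved, the framework has done its job.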

Design Thinking for AI Implementation

From ATD's research on Design Thinking for AI: applying the Design Thinking framework (Empathize, Define, Ideate, Prototype, Test) to AI implementation prevents the common mistake of starting with the technology & looking for problems to solve. Instead, start with the learner's actual pain points, define the specific problem AI should address, ideate multiple approaches, prototype quickly, & test with real users before scaling.

Building Your AI Policy

Every L&D team needs a clear AI policy that covers: which AI tools are approved for use, how learner data is collected & protected, how AI-generated content is reviewed for bias, what disclosure is required when AI is used in training, & how the policy will be updated as the technology evolves.

Pause & Reflect

If a participant in your workshop asked 'Who's responsible when the AI gets it wrong?' — do you have an answer?

How many of your current AI vendors could pass Bernhardt's three-question test? Have you actually asked them?

Experiment to Try

The One-Page AI Policy

Take one AI tool your team uses regularly. Run Bernhardt's Inspect & Verify framework on it: inspect the claims, verify the architecture, test the boundaries, evaluate the ROI. Write up your findings in one page. Share it with your team. You now have the beginning of an AI policy.

Human Skills

The Human Skills That Matter More, Not Less

AI can generate content. It cannot read a room. Stanford's research on when human judgment outperforms algorithms — and what that means for your practice.

The Paradox of AI and Human Skills

Here's the paradox that HBR keeps coming back to: the more AI is integrated into workflows, the more crucial human skills like problem framing, collaboration, & creativity become. AI can generate content, but it can't read a room. It can analyze data, but it can't build trust. It can suggest answers, but it can't ask the question that changes everything.

"AI won't make teaching obsolete. It may, however, render disengaged or uncritical teaching practices obsolete."

— Tari Tan, Harvard Medical School

There's a deeper truth here that Joseph Joubert captured two centuries ago: "To teach is to learn twice." In the age of AI, this is more relevant than ever. When facilitators teach others how to work with AI, they deepen their own understanding. When participants explain their AI-assisted reasoning to peers, they solidify their learning. The act of teaching — of translating knowledge into something another person can understand — remains one of the most powerful learning mechanisms we have. AI cannot replicate it.

MIT's EPOCH Framework

MIT Sloan's EPOCH framework identifies five human capabilities that compensate for AI's shortcomings — the skills that become more valuable, not less, as AI becomes more capable. These are the capabilities facilitators need to develop in themselves & in their participants.

The Art of Powerful Questions

AI can provide answers, but it takes a human to ask the right questions. The facilitator's superpower is the "uncomfortable question" — the one that challenges assumptions & sparks genuine dialogue:

"What are we choosing not to do by pursuing this path?"

"If the AI is wrong here, what's the cost of not catching it?"

"What does this room know that no AI model has been trained on?"

"Who in this room has the most to lose if this decision is made?"

What Stanford's Research Shows

Jann Spiess at Stanford Graduate School of Business found that a "complementary" algorithm — one that provides recommendations only when a human is likely to be uncertain or incorrect — leads to the most accurate decisions, outperforming both purely predictive algorithms & unassisted human decisions. The implication for facilitators: the goal isn't to use AI for everything, but to know exactly where human judgment adds the most value.
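The gating idea behind the Spiess result can be shown in a few lines. This is a toy sketch of the concept only: the threshold, the confidence signal, and the function name are all invented for illustration, not taken from the study.

```python
# Toy sketch of a "complementary" recommendation policy: surface the
# algorithm's suggestion only when the human is likely to be uncertain,
# and stay silent otherwise. Threshold and names are illustrative.

def maybe_recommend(human_confidence, model_choice, threshold=0.6):
    """Return the model's recommendation when the human's self-reported
    confidence falls below the threshold; otherwise return None."""
    return model_choice if human_confidence < threshold else None

rec_confident = maybe_recommend(0.9, "reject")  # human is sure: AI stays quiet
rec_uncertain = maybe_recommend(0.3, "reject")  # human is unsure: AI weighs in
print(rec_confident, rec_uncertain)
```

The design choice is the point: the system's value comes from knowing when not to speak, which is exactly the judgment call facilitators are making when they decide where AI belongs in a workflow.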

"We shouldn't be afraid of it. We should learn how to use it and leverage those productivity gains — not as a substitute for things people are good at, like critical thinking and coming up with new ideas."

— Lawrence Schmidt, MIT Sloan

AI Coaching: Promise and Peril

From Ryann K. Ellis's ATD research on AI coaching: AI coaching tools are proliferating rapidly, but the research is clear — AI coaching works best as a supplement to human coaching, not a replacement. The Conference Board found that AI coaching can increase coaching access from the typical 10-15% of employees to potentially 100%, democratizing what was previously an executive-only benefit. But the human coach's ability to read emotional subtext, challenge assumptions with empathy, & hold space for vulnerability remains irreplaceable. The sweet spot: use AI for practice, preparation, & reinforcement between human coaching sessions.

Fighting Fear with Information

One of the most practical insights from the ATD research: "Fight fear with information." When employees are anxious about AI replacing their jobs, the worst response is to ignore the anxiety or dismiss it. The best response is radical transparency — share what AI can and can't do, show specific examples of how it augments rather than replaces, and give people hands-on experience so they can form their own informed opinions. Fear thrives in information vacuums. Fill the vacuum.

Essential Reading

Co-Intelligence

Ethan Mollick

Living & working with AI — from the Wharton professor who required it in his MBA classes.

The Fearless Organization

Amy C. Edmondson

The definitive guide to psychological safety — more relevant now than when she wrote it.

The Art of Gathering

Priya Parker

How to create gatherings that actually matter. Changed how I think about rooms.

How People Learn II

National Academies

The science of learning, memory, & transfer — essential for evidence-based facilitation.

Pause & Reflect

What question could you ask in your next meeting that no AI would think to ask — because it requires reading the room, not the data?

Joseph Joubert said 'To teach is to learn twice.' When was the last time you learned something by teaching it? What made that different from learning it alone?

Experiment to Try

The Human-Only Challenge

In your next workshop, replace one AI-generated exercise with a human-only challenge: participants must solve a problem using only conversation, whiteboard, & their collective experience. No screens. Time it. Then run the same problem with AI assistance. Compare not just the output quality, but the energy in the room.

ATD Framework

From Order-Taker to Strategic Learning Architect

Debbie Richards' three pillars — Enablement, Curation, Governance — and why L&D's survival depends on moving from order-taker to architect.

The Identity Crisis in L&D

For decades, L&D professionals have been treated as order-takers: "We need a course on X." AI is forcing a reckoning with that model. As Debbie Richards argues in ATD Magazine, the shift isn't just about using AI tools — it's about repositioning L&D as a strategic function.

"The shift from order-taker to strategic learning architect isn't optional anymore — it's survival."

— Debbie Richards, ATD Magazine

The Three Pillars

Richards identifies three pillars for the strategic learning architect:

1. Enablement: Equipping teams with AI tools & the judgment to use them effectively. Not just training on tools — building the critical thinking to evaluate AI outputs.

2. Curation: Becoming the trusted filter. In a world drowning in AI-generated content, the curator who separates signal from noise is invaluable.

3. Governance: Establishing the guardrails. Who reviews AI-generated training? What's the quality standard? How do we ensure accuracy & avoid bias?

The Human-AI-Human Framework

Richards' most practical contribution is the Human-AI-Human workflow: humans define the strategy & requirements, AI generates & processes, humans review, refine, & apply judgment. Not just a workflow — a philosophy that keeps humans in the driver's seat while leveraging AI's speed.

The R-T-C-F Prompting Framework

For L&D professionals learning to work with AI, Richards offers the R-T-C-F framework for effective prompting:

R — Role: Tell the AI what role to play ("You are an instructional designer specializing in...")

T — Task: Define the specific task clearly

C — Context: Provide relevant background and constraints

F — Format: Specify the desired output format
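For teams that template their prompts, the four parts can be assembled programmatically. The R-T-C-F structure is Richards'; the helper function, field names, and example content below are our own sketch.

```python
# A small helper that composes a prompt from the four R-T-C-F parts:
# Role, Task, Context, Format. Structure per Richards; code is illustrative.

def rtcf_prompt(role, task, context, fmt):
    """Compose an R-T-C-F prompt as four labeled lines."""
    return "\n".join([
        f"Role: You are {role}.",
        f"Task: {task}",
        f"Context: {context}",
        f"Format: {fmt}",
    ])

prompt = rtcf_prompt(
    role="an instructional designer specializing in compliance training",
    task="Draft a five-question knowledge check on our data-privacy policy.",
    context="Audience: new hires in EU offices; the policy was updated last quarter.",
    fmt="Numbered list, one correct answer per question, answer key at the end.",
)
print(prompt)
```

Templating the four parts also makes prompts reviewable: a colleague can critique the Role or Context line without rereading a wall of text.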

The 10x Workforce Multiplier

Meanwhile, Angelo Biasi's research in ATD Magazine introduces the "Ultimate Skill Stack" — the combination of AI literacy, human intelligence, & systemic innovation that creates 4-10x productivity multipliers. The key insight: it's not about replacing humans with AI — it's about stacking human & AI capabilities to create something neither could achieve alone.

Pause & Reflect

When someone in your organization says 'We need a course on X,' do you build the course — or do you ask 'What problem are we actually solving?'

Of Richards' three pillars — Enablement, Curation, Governance — which one is your team strongest in? Which one are you avoiding?

Experiment to Try

The Problem Behind the Request

Pick one request that came in this month ('We need training on...'). Instead of building it, apply the Human-AI-Human framework: define the real problem (human), generate three possible solutions with AI, then evaluate which one actually addresses the root cause (human). Present the analysis instead of the course.

Vendor Evaluation

Cutting Through the Hype: How to Evaluate AI Agents and Vendors

Most 'AI agents' are glorified chatbots. Bernhardt's five-question framework for figuring out which ones are real.

The Agent Washing Problem

Every vendor now claims to have "AI agents." Markus Bernhardt's analysis in ATD Magazine reveals the uncomfortable truth: most of what's marketed as "agentic AI" is really just automated workflows with a chatbot interface. The term "agent washing" — like greenwashing, but for AI capabilities — is becoming a serious problem for L&D buyers.

"If a vendor can't explain what their 'AI agent' does differently from a well-designed automation, that's your answer."

— Markus Bernhardt, ATD Magazine

The Five-Question Framework

Bernhardt provides five questions every L&D professional should ask before buying any AI tool:

1. "Can I see it handle an edge case?" — Real AI adapts to unexpected inputs. Scripted automation breaks.

2. "What happens when it's wrong?" — How does the system handle errors? Is there a human review loop?

3. "Show me the training data." — What was the AI trained on? Is it relevant to your industry and context?

4. "What can't it do?" — Any vendor who says "everything" is lying. Honest vendors know their limitations.

5. "What does the human still need to do?" — The best AI tools are clear about where human judgment is required.

The AI Coaching Reality Check

ATD's analysis of AI coaching tools found that while AI coaching can be remarkably effective for specific skills (like presentation practice with Yoodli), it works best as a complement to human coaching, not a replacement. The key: AI coaching provides the reps & data, human coaches provide the context & emotional intelligence.

Karl Kapp's Reality Check

Karl Kapp's 2026 analysis provides essential context: AI agents currently complete only about 30% of tasks autonomously. The rest still requires human oversight. Not a failure of AI — just the reality of where the technology actually is, versus where the marketing says it is.

Pause & Reflect

How many AI tools is your organization currently paying for? How many are people actually using weekly? The gap between those two numbers is your real adoption rate.

Kapp says AI agents complete only 30% of tasks autonomously. Does your vendor's marketing match that reality?

Experiment to Try

The Vendor Reality Check

Pick your most expensive AI subscription. Spend one hour testing it with Bernhardt's five questions. Document what it actually does vs. what the sales deck promised. Calculate the real ROI based on actual usage, not projected usage. Share the findings with whoever signs the check.

Implementation

The AI Implementation Roadmap for L&D Teams

Torrance's go-kart-to-Formula-One approach: start small, learn fast, scale what works. Plus the ATD data on where organizations actually are.

The Go-Kart vs. Formula One Approach

Megan Torrance's upcoming book (releasing June 2, 2026) offers the most practical framework for AI adoption in L&D. Her central metaphor: don't try to build a Formula One car on day one. Start with a go-kart. Get something working, learn from it, iterate.

"The organizations that succeed with AI aren't the ones that go biggest first. They're the ones that go smallest first and learn fastest."

— Megan Torrance, CEO, TorranceLearning

The W.I.S.E. A.T. A.I. Framework

Torrance's W.I.S.E. A.T. A.I. framework provides a structured approach to evaluating and implementing AI in L&D workflows:

W — What is the task you're trying to accomplish?

I — Is AI the right tool for this specific task?

S — Start small with a proof of concept

E — Evaluate the output against your quality standards

A.T. — Adjust and Test iteratively

A.I. — Adopt and Integrate what works into your standard workflow

The ATD Research Reality

ATD's 2023 research report on AI in Learning and Talent Development found that 42% of organizations don't use AI at all, yet 75% plan to within two years. The top barriers? Budget (95%) & lack of skilled staff (94%). The window for L&D professionals to build AI skills & position themselves as strategic partners is right now.

The Pace of Change

For context on why urgency matters: AI 2027, a scenario analysis by former OpenAI researcher Daniel Kokotajlo and Eli Lifland (#1 RAND forecaster), projects that AI capabilities will advance faster than most organizations are preparing for. Their analysis suggests the impact of superhuman AI will exceed the Industrial Revolution. Whether or not you agree with their timeline, the direction is clear: L&D teams that wait to build AI capability will be too late.

Pause & Reflect

What's the smallest possible AI experiment you could run this week that would teach your team something real? Not a pilot program. Not a strategy document. One experiment.

Torrance says start with a go-kart, not a Formula One car. What's your go-kart?

Experiment to Try

Your First Case Study

This week, pick one repetitive task your team does. Apply the W.I.S.E. A.T. A.I. framework: identify the task (W), decide if AI fits (I), build a tiny proof of concept (S), evaluate the output (E), adjust and test (A.T.), then adopt and integrate what works (A.I.). Document the whole thing in 30 minutes. You now have your first case study.

"We keep adding. The field keeps moving."

Got a topic that belongs here? Let us know