Your people are waiting for you to say yes.
Ethan Mollick's research shows that workers are already using AI — and hiding it. They are worried about organizational reaction, not about the technology itself. The permission gap — not the technology — is the largest barrier to AI adoption in your section.
Agenda
| Time | Module | Focus |
|---|---|---|
| 0:00 – 0:05 | Opening & Context | The permission gap, DoW AI Strategy, rapid adoption mandate |
| 0:05 – 0:10 | Permission Culture | What “yes” looks like, what kills adoption, what NOT to do |
| 0:10 – 0:20 | Evaluating AI-Assisted Output | Four questions, practice scenarios, evaluation exercises |
| 0:20 – 0:25 | The Apprentice Problem | Preserving junior development while gaining efficiency |
| 0:25 – 0:30 | Quick Reference & Wrap-Up | Supervisor quick reference card, closing action item |
Module 1: Why This Matters Now (5 min)
The data is unambiguous. AI adoption with proper leadership support produces measurable results. AI adoption without it fails consistently.
- Microsoft research: Most workers abandon AI within weeks without organizational support.
- UK government study: Workers saved roughly 25 minutes per day when proper support structures were in place.
- Department of War (DoW) January 2026 AI Strategy: Year of Military AI Dominance — this is the directive, not a suggestion.
- Army: Created a dedicated AI/ML officer career field (49B).
- Marine Corps: Running generative AI workshops at Quantico.
AI is a SITREP item. Not optional anymore.
Module 2: Permission Culture (5 min)
What “Yes” Looks Like
- “Try it. Show me what you build.”
- “I don't need to understand how it works. I need to understand what it does.”
- “If it saves time and meets quality standards, we should do it.”
- Protected time for learning.
- Public recognition for useful builds.
What Kills Adoption
- “I need to approve every AI interaction.”
- “Don't use AI for anything official.”
- “We'll wait until there's a formal policy.”
- Punishing experimentation failures.
- Treating AI use as suspicious or lazy.
What Supervisors Should NOT Do
- Don't ban AI use outright. Prohibition drives it underground where you can't guide it. You lose visibility and control.
- Don't require approval for every AI interaction. Creates a bottleneck that kills velocity. Review output, not every query.
- Don't assume AI output is automatically wrong. Blanket skepticism kills adoption. Verify like any work product — evidence-based, not assumption-based.
- Don't skip verification just because the Marine is experienced. Seniority doesn't eliminate hallucination risk. Everyone verifies.
Guard Rails (Not Roadblocks)
- All tools go through the EDD SOP before deployment.
- AI-generated content for official use must be reviewed.
- Sensitive or classified information never enters unauthorized AI systems.
- Failed experiments are shared as learning, not punished.
The default answer should be “yes, with appropriate review.”
Module 3: Evaluating AI-Assisted Output (10 min)
When a Marine brings you AI-assisted work, you need exactly four questions:
- Does it work? Can you demonstrate it?
- Is the output accurate? Verifiable facts, references, numbers?
- Does it follow the SOP? Proper review process completed?
- Does it save time? Before-and-after comparison?
You do not need to evaluate how AI produced it. Focus on output, not mechanism. Apply the same standards as any work product — if a Marine handed you this without mentioning AI, would you sign it?
Practice Scenarios
Supervisors must practice evaluation before they need it operationally. Work through these two scenarios. What questions would you ask?
Scenario A: AI-Generated Leave Policy Summary
A Marine presents you with a one-page summary of leave policy for Emergency Leave Authorization. The summary is clear, well-formatted, and cites MCO P1050.3K. The Marine tells you it was generated using ChatGPT and reviewed for accuracy. They want to distribute it to the section.
Suggested Evaluation Questions
- Can you verify the MCO citation? Pull up the actual order. Does the summary match?
- What did you use as the AI input? Did you paste the full MCO text, or ask general questions? Context quality matters.
- Who reviewed this for accuracy? AI-generated policy content requires subject-matter verification, not just proofreading.
- What happens if the policy changes? Is there a process to update this, or will outdated summaries circulate indefinitely?
Scenario B: Automated Reporting Tool
A Marine built a Python script that pulls data from a shared spreadsheet and auto-generates the weekly operations summary. It runs in 30 seconds. The manual process took 90 minutes. The Marine wants approval to use it for official reporting.
Suggested Evaluation Questions
- Can you demonstrate it with live data? Show me input, process, output. Walk through one full cycle.
- What happens when the input data is malformed? Does it fail gracefully, or generate incorrect reports?
- Who else has reviewed this code? Per EDD SOP, automation tools require peer review and security review before operational use.
- How do we verify accuracy each week? Sampling checks, output validation — what's the ongoing QA process?
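The "malformed input" question above is concrete enough to sketch. Below is a minimal, hypothetical version of the kind of reporting script Scenario B describes — the function name, CSV column names, and status values are all assumptions for illustration, not the Marine's actual tool. The point a supervisor should look for: the script raises an error on bad input instead of silently generating a wrong report.

```python
import csv
import io

def generate_weekly_summary(csv_text: str) -> str:
    """Build a weekly operations summary from CSV rows of (task, status).

    Hypothetical sketch: fails loudly on malformed input rather than
    emitting an incorrect report.
    """
    reader = csv.DictReader(io.StringIO(csv_text))
    required = {"task", "status"}
    # Refuse to run if the expected columns are missing entirely.
    if reader.fieldnames is None or not required.issubset(reader.fieldnames):
        raise ValueError(f"Input missing required columns: {sorted(required)}")

    counts = {"complete": 0, "pending": 0}
    for line_no, row in enumerate(reader, start=2):
        status = (row["status"] or "").strip().lower()
        if status not in counts:
            # Surface the bad row instead of silently skipping it.
            raise ValueError(f"Row {line_no}: unknown status {row['status']!r}")
        counts[status] += 1

    return f"Weekly summary: {counts['complete']} complete, {counts['pending']} pending"
```

In a review, the demonstration question maps directly onto this: feed it one clean dataset and one deliberately broken dataset, and confirm the broken one produces an error rather than a plausible-looking report.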
Module 4: The Apprentice Problem (5 min)
The Risk: Junior Marines Who Never Learn Fundamentals
If AI handles all the routine tasks that junior Marines traditionally learned on, they never develop professional judgment. They become dependent on tools they don't understand. Entry-level employment in AI-exposed fields has dropped 13% since 2022. The military cannot afford a generation of Marines who can operate AI but cannot operate without it.
Supervisors Must Ensure AI Augments, Not Replaces, Development
AI is a force multiplier for trained professionals. It is a crutch for untrained juniors. Your role is to ensure the former, prevent the latter.
Three Supervision Checks
- Can the Marine explain the output without referencing AI? If they can't walk you through the logic, they didn't learn — they copied.
- Require periodic tasks to be completed without AI. Preserve baseline competency. If the tool disappears, can they still operate?
- Use AI output as a teaching tool. “Is this correct? How would you verify it? What would happen if this number were wrong?”
Developmental Assignments Are Non-Negotiable
Certain tasks exist to build judgment, not just complete work. Writing after-action reviews, analyzing lessons learned, drafting initial counseling statements — these are formative assignments. AI can assist, but it cannot replace the learning value of the work itself.
Goal: AI-augmented Marines, not AI-dependent ones.
Supervisor Quick Reference (5 min)
This is a reference card supervisors can screenshot, print, or save. Use it as a decision aid when evaluating AI-assisted work.
Supervisor Quick Reference: AI-Assisted Work
3 Questions to Ask Before Approving AI-Assisted Work
- Can you demonstrate it working? Live demo with real inputs.
- How did you verify accuracy? Specific checks, not general trust.
- Who reviewed this? Peer review, SME review, security review per SOP.
3 Signs of Healthy AI Use
- The Marine can explain the output without referencing AI. They understand the work, not just the tool.
- Failed experiments are shared openly. Learning culture, not fear culture.
- AI is saving time on routine tasks, not replacing judgment tasks. Force multiplier, not replacement.
3 Warning Signs of Unhealthy AI Use
- The Marine cannot answer basic questions about their own output. Dependency, not augmentation.
- AI use is hidden or apologized for. Sign of a fear culture you created.
- Junior Marines are skipping foundational tasks entirely. No baseline competency being built.
Default Posture: Yes, with appropriate review.
Tools Your Marines Will Use
- GenAI.mil — the Marine Corps enterprise AI platform per MARADMIN 018/26. IL5-authorized for CUI; the default tool for all AI-assisted work on DoD networks. ChatGPT, Gemini, and Grok are also available on GenAI.mil.
- CamoGPT (Army-managed) — supplementary capabilities, including API access and IL6/SIPR support.
- Commercial tools (ChatGPT, Gemini via their websites) — approved for unclassified work only.
- PII must be anonymized on all AI platforms unless a PIA authorizes it.
See the Approved Tools page for the full list.
Closing
The EDD program provides structure. Your people have motivation. The Department of War has directed priority. The only thing between your section and measurable productivity gains is your permission. Give it.
Instructor Note
This is the highest-leverage 30 minutes. Respect the time constraint. Lead with operational relevance. End with a specific ask: “Within the next week, ask one person in your section what they'd build if they had permission.”
Knowledge Check
According to research cited in this course, what is the largest barrier to AI adoption in most organizations?
Ethan Mollick's research shows workers are already using AI but hiding it. They worry about organizational reaction, not the technology itself. The permission gap is the largest barrier.
The UK government study on AI adoption found that workers with proper support structures saved approximately how much time per day?
The UK government study found 25 minutes per day in savings when proper support structures were in place. AI adoption with leadership support produces measurable results.
What does the Department of War's January 2026 AI Strategy represent for military units?
The DoW January 2026 AI Strategy declared the Year of Military AI Dominance. This is a directive, not a suggestion. AI is a SITREP item — no longer optional.
Knowledge Check
Which of the following is an example of what “yes” looks like in a permission culture?
“Try it. Show me what you build” is a permission culture statement. The other options are adoption-killing behaviors that create fear cultures and drive AI use underground.
Why should supervisors NOT ban AI use outright?
Prohibition drives AI use underground where you can't guide it. You lose visibility and control. The goal is guard rails (not roadblocks) with appropriate review processes.
What is the recommended default posture for supervisors when evaluating AI use?
The default answer should be “yes, with appropriate review.” This creates a permission culture while maintaining necessary guard rails through the EDD SOP process.
Knowledge Check
When a Marine brings you AI-assisted work, which of the following is one of the four critical evaluation questions?
The four critical questions are: Does it work? Is the output accurate? Does it follow the SOP? Does it save time? Focus on output quality, not on how the AI produced it.
When evaluating AI-assisted output, supervisors should focus on:
You do not need to evaluate how AI produced the output. Focus on output, not mechanism. Apply the same standards as any work product — if a Marine handed you this without mentioning AI, would you sign it?
In the practice scenario where a Marine built an automated reporting tool, what is the FIRST evaluation question a supervisor should ask?
“Can you demonstrate it with live data?” is the first evaluation step. Show input, process, output — walk through one full cycle. Verification starts with demonstration.
Knowledge Check
What is the “apprentice problem” in the context of AI adoption?
If AI handles all routine tasks that junior Marines traditionally learned on, they never develop professional judgment. They become dependent on tools they don't understand. The goal is AI-augmented Marines, not AI-dependent ones.
Which of the following is one of the three supervision checks for preserving junior development?
One of the three supervision checks is requiring periodic tasks to be completed without AI. This preserves baseline competency — if the tool disappears, can they still operate?
Why are developmental assignments like after-action reviews and initial counseling statements considered “non-negotiable” for junior Marines?
Certain tasks exist to build judgment, not just complete work. AI can assist, but it cannot replace the learning value of formative assignments. The goal is AI-augmented Marines, not AI-dependent ones.
Knowledge Check
Which of the following is a sign of HEALTHY AI use in a section?
Failed experiments being shared openly is a sign of a healthy learning culture. When Marines hide or apologize for AI use, that signals a fear culture created by the supervisor.
Which of the following is a WARNING sign of unhealthy AI use?
Junior Marines skipping foundational tasks entirely means no baseline competency is being built. This is dependency, not augmentation. Supervisors must ensure AI augments development, not replaces it.
According to the supervisor quick reference, what are the three questions to ask before approving AI-assisted work?
The quick reference card lists three questions: (1) Can you demonstrate it working? (2) How did you verify accuracy? (3) Who reviewed this? These focus on output quality and process compliance, not on the AI mechanism.
Course Completion Checklist
Check off each item as you complete it. Your progress is saved automatically in your browser.
Course Complete!
You have completed all items in the Supervisor Orientation course. View your progress dashboard or generate a completion certificate.