Instructor Qualification Note
This course requires an instructor who has personally built and deployed at least 3 tools using AI assistance. The complex build in Module 2 requires real-time troubleshooting ability. You cannot effectively teach this course from a script alone. Students will encounter unexpected errors, frontier limitations, and integration failures. The instructor must be able to diagnose problems on the fly, guide students through iterative refinement, and recognize when a problem is a frontier issue versus a context/prompting issue.
Timing Breakdown
| Module | Duration | Cumulative Time |
|---|---|---|
| Module 1: Frontier Mapping | 30 min | 0:30 |
| Module 2: Complex Build | 60 min | 1:30 |
| Break | 10 min | 1:40 |
| Module 3: Group Debugging | 40 min | 2:20 |
| Module 4: Verification & QA | 30 min | 2:50 |
| Module 5: Teaching Methodology | 30 min | 3:20 |
| Module 6: Workflow Playbook & Wrap-Up | 30 min | 3:50 |
| Buffer | 10 min | 4:00 |
The modules and break total 230 minutes (3 hours 50 minutes); the 10-minute buffer brings the session to 240 minutes (4 hours), matching the cumulative time above. The buffer accounts for technical issues, extended Q&A, or students who need additional troubleshooting time during the complex build.
Agenda
| Time | Module | 201 Skills Applied |
|---|---|---|
| 0:00–0:30 | Module 1: Frontier Mapping for Your Domain | Frontier Recognition |
| 0:30–1:30 | Module 2: Complex Build (Unit Readiness Dashboard) | All six skills (Context Assembly, Quality Judgment, Task Decomposition, Iterative Refinement, Workflow Integration, Frontier Recognition) |
| 1:30–1:40 | Break | |
| 1:40–2:20 | Module 3: Group Debugging (Real Problems) | Iterative Refinement, Frontier Recognition, Context Assembly |
| 2:20–2:50 | Module 4: Verification Protocols and QA | Quality Judgment, Workflow Integration |
| 2:50–3:20 | Module 5: Teaching Others (Teach-Back Exercise) | Context Assembly, Quality Judgment |
| 3:20–3:50 | Module 6: Workflow Playbook Creation | Workflow Integration, Task Decomposition |
| 3:50–4:00 | Wrap-Up & Next Steps | |
Total: 4 hours (includes 10-minute break and 10-minute buffer built into timing)
Instructor Prerequisite Check
Before teaching this course, verify you can do all of the following without AI assistance:
- Build a SharePoint list from a CSV import and complete post-import steps (Person columns, Choice conversion, Calculated columns)
- Write a Power Fx Filter formula with delegation awareness and explain why Search() is not delegable
- Configure a Try/Catch scope pattern in Power Automate with Run After settings
- Explain the difference between Power Fx, WDL, and DAX and when each is used
If you cannot do all four, complete Course 3: Platform Training before teaching this course. Students will ask questions that require hands-on experience to answer.
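For the second item, this is the kind of contrast an instructor should be able to write and explain on demand. A minimal sketch; the list and column names are illustrative, not from a real build:

```powerfx
// Delegable: the equality predicate translates to a SharePoint query,
// so filtering happens server-side and all matching rows come back.
Filter(TrainingRoster, Company = "Alpha")

// Not delegable: Search() has no server-side translation for SharePoint,
// so Power Apps retrieves only the first rows up to the delegation limit
// (500 by default, 2000 maximum) and searches those locally. On a large
// list, results are silently incomplete.
Search(TrainingRoster, txtQuery.Text, "LastName")
```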
Module 1: Frontier Mapping for Your Domain
Duration: 30 minutes
Each participant creates a frontier map for their domain. Map the frontier for each tool separately. What GenAI.mil handles well in your domain may differ from what CamoGPT (Army-managed) handles well. For example, GenAI.mil may excel at generating Power Apps formulas but struggle with complex data transformations that CamoGPT’s dataset conversation feature handles natively.
Frontier Map Format
| Category | GenAI.mil Handles | CamoGPT (Army-managed) Handles | Neither Handles (Human Only) | Moving Frontier |
|---|---|---|---|---|
| Document generation | Correspondence drafts, counseling statements, award write-ups, memo formatting, code generation for Power Platform | Same drafting capabilities, plus shared workspace for collaborative editing (capability evolving) | Documents requiring institutional knowledge (unit SOPs, local policy interpretation), exact regulation quotes | Fitness report narratives (improving rapidly), legal review summaries |
| Data analysis | Trend identification, summarizing datasets, creating chart descriptions, anomaly flagging | Dataset conversation (CSV upload and query), structured data analysis, API-driven data pipelines | Interpreting data in operational context, cross-referencing classified and unclassified sources | Predictive analysis (retention modeling, maintenance forecasting) |
| Process automation | Generating Power Automate flow logic, approval routing code, notification templates | API integrations, tool calling for automated workflows, programmatic AI access | Multi-system integrations with legacy databases, human judgment calls (hardship determinations) | Complex conditional workflows (getting better with clear business rules) |
| Reference lookup | Finding relevant MCOs/NAVMCs, summarizing policy documents, comparing regulation versions | Same capabilities (both platforms support file upload); CamoGPT adds API access for programmatic queries | Interpreting how regulations apply to edge cases, resolving conflicting guidance between orders | Policy applicability questions (models improving but still unreliable for authoritative interpretation) |
| Training development | Lesson plan outlines, quiz generation, scenario creation, slide deck structure | Collaborative lesson plan development in shared workspaces (capability evolving), data-driven training analysis | Evaluating training effectiveness, adapting content for specific MOS requirements, doctrinal accuracy | Full lesson plan generation with appropriate examples (quality varies significantly) |
Issue Tracking Format
As you encounter specific issues during builds, log them in this format. Over time, this becomes your team's institutional knowledge of where AI helps and where it doesn't.
| Issue | Platform | Category | Workaround | Date |
|---|---|---|---|---|
| AI writes forEach instead of Filter | Power Apps | Frontier | Add "Power Fx only, not JavaScript" to prompt | 17 Mar 2026 |
| Trigger sends stale data | Power Automate | Platform limitation | Add Get item action after trigger | 17 Mar 2026 |
| AI suggests Delay for approval timeout | Power Automate | Context | Specify: use Timeout property on approval action (P2D) | 17 Mar 2026 |
Categories: Frontier (AI capability limit), Platform limitation (how the platform works, not an AI issue), Context (AI guessed wrong because prompt was insufficient).
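For the first logged issue, the before/after looks like this. A sketch with an illustrative list name:

```powerfx
// What the AI tends to produce without guardrails (JavaScript habits):
//   items.forEach(item => { if (item.Status === "Ready") { ... } });
// What the "Power Fx only, not JavaScript" prompt addition elicits instead:
Filter(TrainingRoster, Status = "Ready")
```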
Deliverable: One-page frontier map published on the EDD site.
Key Teaching Point
This map is the most valuable artifact. It prevents the 19-percentage-point performance drop from the BCG-Harvard study. When workers apply AI beyond the frontier without knowing it, quality collapses. The frontier map makes the boundary visible.
Data Handling Reminder
Advanced builds often involve real unit data. CUI is authorized on GenAI.mil and CamoGPT (Army-managed). PII (names, SSNs, contact info) must be anonymized even on IL5 tools unless a PIA authorizes it. Commercial tools (ChatGPT via chatgpt.com, Gemini via gemini.google.com) are for unclassified data only. Note: ChatGPT, Gemini, and Grok are also available on GenAI.mil where they are CUI-authorized.
Module 2: Complex Build — Multi-Component System
Duration: 60 minutes
This is the most complex build in the entire curriculum. Students will build a Unit Readiness Dashboard that pulls from multiple data sources, requires 8-10 sequential prompts, and demonstrates advanced techniques including multi-step prompting, context management, and error recovery. This build should feel significantly harder than anything in Platform Training.
Instructor Note: Mode-Switching is the Goal
The goal is conscious, deliberate mode-switching, not a polished product. Assess students on their decision-making process, not on output quality. Watch for students who understand when to slow down for accuracy-critical work versus when to iterate rapidly. This is the key skill.
Build Goal: Unit Readiness Dashboard for 1st Bn, 99th Marines
Create a Power BI dashboard that consolidates:
- Training status (pulled from Excel training tracker)
- Equipment status (pulled from GCSS-MC export or simulated CSV)
- Personnel status (pulled from Alpha roster or simulated data)
The dashboard should display overall readiness percentage, breakdowns by company, and flag critical shortfalls.
Phase 1: Data Architecture (Centaur Mode — 15 minutes)
Why Centaur: Data schema errors compound throughout the project. A bad data model means rework. Slow down and verify.
Instructor Checkpoint
Before students begin, ask: "Why are we using centaur mode for this phase?" Correct answer: "Because the data model is the foundation. If we get this wrong, everything built on top of it breaks. We need to verify the schema before moving forward."
Step-by-Step Prompting Sequence
Instructor Note: Verification Point
Stop here. Have students manually verify the proposed schema against their actual data sources. Common AI errors: proposing columns that don't exist in the source data, misunderstanding military data structures (e.g., confusing EDIPI with DOD ID), or creating overly complex schemas that won't work with the actual data.
Decision Point: Ask Students
At this point, ask: "The AI proposed renaming columns in your source data. Should you rename the columns in the Excel file, or handle the mapping in Power Query?" Discuss tradeoffs: renaming source data breaks existing workflows; mapping in Power Query adds transformation complexity but preserves source integrity.
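If the group chooses the Power Query side of the tradeoff, the mapping looks roughly like this. A sketch only; the file path, sheet name, and column names are placeholders:

```powerquery
let
    // Read the workbook from disk; the source file itself is never modified.
    Source   = Excel.Workbook(File.Contents("C:\Data\TrainingTracker.xlsx"), null, true),
    Sheet    = Source{[Item = "Roster", Kind = "Sheet"]}[Data],
    Promoted = Table.PromoteHeaders(Sheet, [PromoteAllScalars = true]),
    // The rename lives inside the query, so existing workflows that read
    // the Excel file by its original headers keep working.
    Renamed  = Table.RenameColumns(Promoted, {{"EDIPI #", "EDIPI"}, {"Co", "Company"}})
in
    Renamed
```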
Phase 2: Data Ingestion (Cyborg Mode — 15 minutes)
Why Cyborg: Data ingestion involves iteration and trial-and-error. Connection strings fail, file paths break, data types mismatch. Rapid iteration is more valuable than slow verification.
Common Error: This Will Likely Fail
Students will copy-paste the M code and it will fail. This is expected. Common causes: file path has spaces and isn't escaped, Excel sheet name is wrong, column names have special characters. This is a teaching moment: "This is cyborg mode. The first output failed. What's the error message? Feed it back to the AI."
Teaching Point: Error Recovery
Highlight this moment. "Notice how we didn't start over. We gave the AI the error message and context. This is how you work in cyborg mode: fast iteration, immediate feedback, continuous refinement."
Phase 3: Dashboard Visualization (Centaur Mode — 20 minutes)
Why Centaur: The dashboard is the end product. Leadership will make decisions based on this output. Accuracy is critical. Slow down and verify every number.
Verification Checkpoint
Stop here. Have students manually calculate readiness for one company using a calculator. Does the DAX formula produce the same number? If not, the formula is wrong. This is centaur mode: verify before proceeding.
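A sketch of the kind of measure under test; the table, column, and status values are illustrative, not the required solution:

```dax
-- Illustrative measure: share of Marines whose status is "Ready".
Readiness % =
DIVIDE(
    CALCULATE(COUNTROWS(Personnel), Personnel[Status] = "Ready"),
    COUNTROWS(Personnel)
)
-- Manual check: if Alpha Company shows 102 of 120 Marines "Ready", the
-- Alpha card should read 85%. A mismatch usually means the filter context
-- (slicers, relationships, row-level filters) differs from the hand count.
```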
Final Verification
Before students present their dashboards, run this verification protocol: "Pick one Marine from the 'Not Ready' table. Manually trace their data back through the source files. Are they actually missing training? Is their equipment actually NMC? If the dashboard says they're not ready, can you prove it from the source data?" This is the QA step. If students can't verify their output, the dashboard isn't ready.
Debrief Questions (10 minutes)
- Where did you switch modes? Why?
- Where did the AI fail? Was it a frontier issue or a context issue?
- If you had to build this again, what would you do differently?
- How much time did this take? How long would it have taken without AI?
Key Takeaway
This build required 10 prompts, 3 mode switches, and at least 2 error recovery cycles. This is normal for complex builds. The students who succeeded were the ones who verified at each phase boundary, recognized when AI output was wrong, and knew how to feed errors back into the conversation for refinement.
Module 3: Group Debugging — Real Problems
Duration: 40 minutes
Participants bring actual tools with actual problems. This module surfaces common failure patterns and builds diagnostic skills. If students don't have broken tools, use the pre-built scenarios below.
Debugging Clinic Protocol
Use this structured protocol for each debugging session:
- Student Presentation (2 minutes): Student explains what the tool should do, what it actually does, and what they've already tried. Use the format: "Expected behavior... Actual behavior... Steps I've taken..."
- Group Diagnosis (3 minutes): Group asks clarifying questions and proposes hypotheses. Instructor guides with questions: "Is this a data issue or a logic issue? Is the problem at the input stage or the output stage? Have we seen this pattern before?"
- Instructor Synthesis (2 minutes): Instructor identifies the root cause category (frontier limitation, insufficient context, incorrect assumption, integration failure, data quality issue) and explains how to approach the fix.
- Document the Pattern: Add the failure case to the collective frontier map.
Time allocation: 7 minutes per problem. Aim for 5 problems in 35 minutes, leaving 5 minutes for final synthesis.
Instructor Note: Managing the Session
Keep time strictly. Students will want to fully fix every problem. The goal is not to fix everything; the goal is to build diagnostic patterns. After 3 minutes of group diagnosis, move to synthesis even if the problem isn't solved. Students can continue debugging after class. The value is in recognizing the pattern.
Pre-Built Debugging Scenarios (If Needed)
If students don't bring broken tools, use these three scenarios. Each scenario includes a problem description, expected diagnosis, and solution approach.
Scenario 1: Power Automate Flow Triggers But Sends Wrong Data
Situation: A Power Automate flow is supposed to trigger when a SharePoint list item is updated, then send an email notification to the item creator with the updated information. The flow triggers correctly, but the email always contains the OLD data, not the updated data.
Student's Attempted Fixes: "I rebuilt the email body three times. I checked the SharePoint permissions. I re-authenticated the connection. Nothing works."
Answer Key: Root Cause & Solution
Root Cause: This is a classic Power Automate timing issue. The trigger "When an item is created or modified" fires immediately when the update is detected, but it captures the item data at the moment of detection, which may be before all fields have finished saving. This is especially common with calculated columns or cascading updates.
Diagnosis Questions to Ask:
- Does the email contain the old data or blank data? (Old data suggests timing issue; blank suggests permissions issue)
- Are any of the fields calculated or lookup columns? (Calculated columns update asynchronously)
- Does the flow work correctly if you manually trigger it 30 seconds after updating? (Confirms timing issue)
Solution Approach: Add a "Delay" action of 5-10 seconds immediately after the trigger, then use "Get item" to retrieve the updated data explicitly rather than relying on the trigger output. The corrected flow structure: Trigger > Delay (5 seconds) > Get item (by ID) > Send email (using Get item output).
Frontier Classification: This is NOT a frontier issue. This is a platform limitation (Power Automate's trigger timing) that requires domain knowledge to diagnose. AI can generate the flow, but it won't know about this timing quirk unless you tell it.
Scenario 2: Power App Form Saves But Doesn't Validate
Situation: A Power Apps form is supposed to validate that a phone number is in the format XXX-XXX-XXXX before saving to a SharePoint list. The form has a text input field with a validation formula, and the submit button should be disabled if the phone number is invalid. The form saves successfully, but it accepts phone numbers in any format, even completely invalid entries like "abc123".
Student's Attempted Fixes: "I asked the AI to write a validation formula three times. I tested different regex patterns. The formula shows no errors in Power Apps, but it doesn't actually prevent invalid data from being submitted."
Answer Key: Root Cause & Solution
Root Cause: The validation formula exists, but nothing in the app enforces it. A canvas-app text input control has no built-in validation hook, so a format check only takes effect where it is wired into a control property. Here the submit button's DisplayMode checks only whether the field is blank (IsBlank), so it verifies presence of data, not correctness, and any non-blank entry can be submitted.
Diagnosis Questions to Ask:
- Where does the validation formula actually live? (If it sits in a label, a variable, or nowhere at all, nothing is enforcing it)
- What is the DisplayMode property of the submit button? (This is where the actual problem usually is)
- Does anything in the app visibly react when you type an invalid number? (If not, the formula is not wired to any control property)
Solution Approach: Put the format check directly in the submit button's DisplayMode property: If(IsMatch(TextInput_Phone.Text, "\d{3}-\d{3}-\d{4}"), DisplayMode.Edit, DisplayMode.Disabled). IsMatch performs a complete match by default, so an entry like "abc123" leaves the button disabled. Optionally surface the same IsMatch check in a warning label's Visible property so users can see why the button is disabled.
Frontier Classification: This is a context issue. AI generated a validation formula, but the student didn't specify that the button's DisplayMode must respect the validation state. This is a common prompting gap: students ask for "validation" but don't specify all the integration points where validation must be enforced.
Scenario 3: Dashboard Shows Stale Data
Situation: A Power BI dashboard pulls from an Excel file stored in SharePoint. When the Excel file is updated, the dashboard doesn't show the new data until the student manually clicks "Refresh" in Power BI Desktop and republishes the report. The student wants the dashboard to automatically update when the source file changes.
Student's Attempted Fixes: "I set up a scheduled refresh in the Power BI service. I checked the data source credentials. I even rebuilt the data connection. The refresh runs successfully according to the logs, but the dashboard still shows old data unless I manually republish from Power BI Desktop."
Answer Key: Root Cause & Solution
Root Cause: This is a data connection configuration issue. When the Power BI report was created, the data source was set to the local file path (e.g., C:\Users\...\file.xlsx) instead of the SharePoint URL. The scheduled refresh in Power BI Service is trying to refresh from the local path, which doesn't exist in the cloud. The refresh "succeeds" but retrieves no new data because it's looking in the wrong place.
Diagnosis Questions to Ask:
- When you created the data connection, did you use "Get Data > SharePoint Folder" or did you download the Excel file and use "Get Data > Excel"? (If the latter, this is the problem)
- In Power BI Desktop, go to Transform Data > Data source settings. What does the file path show? (If it shows a C:\ path instead of a SharePoint URL, this confirms the diagnosis)
- In the Power BI Service refresh history, does it show any warnings or errors, or does it show "Completed successfully"? (This scenario usually shows "Completed successfully" with zero rows refreshed)
Solution Approach: Rebuild the data connection using the SharePoint connector. In Power BI Desktop: Get Data > SharePoint Folder > Enter the SharePoint site URL > Navigate to the folder containing the Excel file > Filter to your specific file > Load. Then configure scheduled refresh in Power BI Service using the SharePoint Online credentials. This establishes a cloud-to-cloud connection that can refresh automatically.
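The rebuilt connection, sketched in M; the site URL, file name, and sheet name are placeholders:

```powerquery
let
    // A URL-based source the Power BI Service can reach on schedule;
    // a C:\ path cannot be refreshed from the cloud.
    Source  = SharePoint.Files("https://contoso.sharepoint.com/sites/UnitSite", [ApiVersion = 15]),
    TheFile = Table.SelectRows(Source, each [Name] = "TrainingTracker.xlsx"){0}[Content],
    Wb      = Excel.Workbook(TheFile, null, true),
    Sheet   = Wb{[Item = "Roster", Kind = "Sheet"]}[Data]
in
    Table.PromoteHeaders(Sheet, [PromoteAllScalars = true])
```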
Frontier Classification: This is a domain knowledge issue, not a frontier issue. AI can generate the dashboard, but it can't know whether you connected to a local file or a SharePoint URL unless you explicitly describe your connection method. This is a common problem with AI-assisted builds: the AI assumes the "standard" approach, but doesn't know the nuances of your environment.
Final Synthesis (5 minutes)
After all debugging sessions, lead a group discussion:
- What patterns did we see? (Common answer: most problems were context issues, not frontier issues)
- How many problems were caused by insufficient prompting versus actual AI limitations?
- What questions should we ask the AI differently next time?
- Which problems belong on the frontier map?
Key Takeaway
Most debugging comes down to three categories: (1) the AI didn't have enough context, (2) the platform has a quirk the AI doesn't know about, or (3) we hit an actual frontier limitation. Categories 1 and 2 can be solved with better prompting and domain knowledge. Category 3 goes on the frontier map.
Module 4: Verification Protocols and QA
Duration: 30 minutes
Build a QA checklist for your domain. For AI-generated output, what must be checked?
QA Checklist
- Source verification — AI fabricates references. Every citation, regulation number, and URL must be independently verified.
- Data accuracy — Numbers, dates, names, and quantities must be checked against source data.
- Logic check — Does the reasoning hold? Are conclusions supported by the premises?
- Format compliance — Does the output match required formats, templates, and standards?
- Domain review — Does this pass the smell test for someone who knows this domain?
Exercise: Timed QA Review
Take an AI-generated document. Run it through your checklist. Time yourself, then compare QA time against creation-from-scratch time. The difference is your time savings, typically 30–50%.
AI-Generated SOP Excerpt — QA Timed Exercise
Instructions: Time yourself. How long does it take you to find all five issues? Typical completion time: 5–10 minutes.
STANDARD OPERATING PROCEDURE
Marine Corps Detachment, 99th Training Group
Subject: Unit Check-In / Check-Out Procedure
Reference: (a) MCO 1000.6B, Individual Records Administration
(b) NAVMC 11800/4 (Rev 03-2025), Check-In/Check-Out Sheet
1. Purpose. To establish standardized procedures for all personnel checking in to and checking out of Marine Corps Detachment, 99th Training Group. All personnel shall complete check-in within 72 hours of reporting aboard.
2. Scope. This SOP applies to all Marines, Sailors, and civilian personnel assigned to or transferring from the Detachment.
3. Procedure — Check-In. Personnel reporting aboard shall complete the following steps in order:
Step 1: Report to the Officer of the Day (OOD) with original orders and ten copies of PCS orders.
Step 2: Obtain a check-in sheet per reference (b).
Step 3: Receive unit orientation brief from S-1 covering unit organization, key personnel, and local policies.
Step 4: Report to assigned section SNCOIC/OIC for introduction and initial task assignment.
5. Report to S-1 for initial in-processing, including service record book review and page 11 entry.
6. Complete remaining check-in sheet signatures (S-3, S-4, Medical, Dental, IPAC) within 48 hours of reporting.
4. Procedure — Check-Out. Personnel transferring from the unit shall initiate check-out procedures no later than 10 working days prior to the date of detachment.
Answer Key — Five Planted Errors
- Fabricated Reference #1: “MCO 1000.6B” is cited as the governing order for check-in procedures. This MCO does not exist. AI-generated regulation numbers must always be independently verified against the official Marine Corps Publications System.
- Fabricated Reference #2: “NAVMC 11800/4 (Rev 03-2025)” is cited as the check-in/check-out form. This form number is fabricated. AI frequently generates plausible-sounding form numbers that do not correspond to real NAVMC forms.
- Data Accuracy Error — Contradictory Timelines: Paragraph 1 states check-in must be completed “within 72 hours of reporting aboard,” but Step 6 states remaining signatures must be completed “within 48 hours of reporting.” These timelines contradict each other. AI often introduces subtle inconsistencies between sections of longer documents.
- Logic Error — Steps Out of Order: Step 3 has the Marine receiving a “unit orientation brief from S-1,” but Step 5 has the Marine reporting “to S-1 for initial in-processing.” Logically, you would in-process at S-1 (Step 5) before receiving the orientation brief (Step 3). The S-1 steps are reversed.
- Format Error — Inconsistent Numbering: Steps 1 through 4 use the “Step 1:” format, but the procedure then switches to a bare “5.” and “6.” format midway through. AI frequently loses formatting consistency in longer documents, especially when generating numbered procedures.
Key Teaching Point
The GDPval study found human experts averaged 7 hours per task. AI-assisted work with human review was 1.4x faster and 1.6x cheaper. The review step is where the value is created.
Module 5: Teaching Others — The 201 Multiplier
Duration: 30 minutes
The Permission Gap
Mollick's research shows that many workers are already using AI but hiding it, worried about how their organization will react. This creates a shadow AI culture where best practices aren't shared. Your role as an Advanced Workshop graduate is to formalize AI use, share techniques, and train others.
Discussion: The Apprentice Problem (5 minutes)
Entry-level job postings dropped 35% from 2023 to 2025. AI is automating the routine tasks that juniors traditionally learned on. If a junior never manually writes a key document because AI generates it, how do they develop the judgment to know when the AI-generated version is wrong?
Protocol for Junior Marines Using AI
- Require review and explanation — Juniors must review AI output and explain WHY it is correct or incorrect.
- Periodically work without AI — Key tasks should periodically be done from scratch to build foundational skills.
- Use AI output as a teaching tool — Give juniors AI-generated products and have them find the problems.
- Rotate QA review — Assign juniors to the QA review step so they develop quality judgment through repeated exposure.
Structured Teach-Back Exercise (20 minutes)
This exercise develops your ability to teach EDD concepts to others. Each student will prepare and deliver a 3-minute teaching segment on one concept from the EDD curriculum.
Step 1: Select a Concept (2 minutes)
Choose one of the following concepts from the curriculum:
- Centaur vs. Cyborg modes
- Frontier mapping
- Context-building in prompts
- Iterative refinement
- Verification protocols
- The Jagged Frontier
Step 2: Preparation Using Template (5 minutes)
Use this template to prepare your teaching segment:
Teach-Back Preparation Template
Concept: [Which concept are you teaching?]
One-Sentence Definition: [Define the concept in one clear sentence]
Why It Matters: [One sentence explaining why this concept is important]
Real Example from Your Work: [Specific example from your actual job where this concept applies]
Common Mistake: [One mistake people make when applying this concept]
Key Takeaway: [One sentence the audience should remember]
Instructor Note: Template Modeling
Before students prepare, model the template yourself with a completed example. Show them what "good" looks like. Example: "I'm teaching Centaur vs. Cyborg. Definition: Centaur mode means human does some tasks and AI does others separately; Cyborg mode means human and AI work together iteratively on the same task. Why it matters: Using the wrong mode causes either wasted time or quality failures. My example: When I built a training schedule, I used Cyborg mode for the draft but switched to Centaur mode for the final verification because accuracy was critical. Common mistake: People stay in Cyborg mode for accuracy-critical work and don't slow down to verify. Key takeaway: Match the mode to the risk level of the task."
Step 3: Small Group Teaching (10 minutes)
Break into groups of 3-4. Each person delivers their 3-minute teaching segment. Rotate until everyone has taught.
Step 4: Peer Evaluation (3 minutes)
After each teaching segment, group members provide feedback using this rubric:
| Criteria | Strong | Needs Improvement |
|---|---|---|
| Clarity of Definition | I could explain this concept to someone else now | I'm still unclear on what this concept means |
| Relevance of Example | The example made the concept concrete and believable | The example was generic or didn't clearly illustrate the concept |
| Practical Takeaway | I know what to do differently because of this teaching | I understand the concept but don't know how to apply it |
Facilitating the Exercise
Walk the room during small group teaching. Listen for common issues: (1) Students who read from their template instead of teaching conversationally, (2) Examples that are too vague ("I used this on a project" instead of specific details), (3) Students who exceed 3 minutes (this is a teaching skill: brevity). Give real-time coaching. The goal is not perfect teaching; the goal is building awareness of what effective teaching looks like.
Group Debrief (5 minutes)
Reconvene as a full group. Discuss:
- What made a teaching segment effective?
- What was difficult about teaching something you know well?
- How would you adapt this approach to teach a 30-minute Platform Training session?
- Who in your unit could benefit from learning these concepts?
Module 6: Workflow Playbook Creation
Duration: 20 minutes, plus 10 minutes for wrap-up and next steps
Each participant produces a one-page playbook for one AI-integrated workflow from their actual job. This is the final deliverable of the Advanced Workshop.
Completed Example: Weekly Training Schedule Publication
Use this filled-in example to show participants what a finished playbook looks like before they create their own.
| Field | Content |
|---|---|
| Task | Weekly training schedule publication for the section |
| Frequency | Weekly — every Thursday by 1600 |
| Mode | Cyborg (continuous back-and-forth refinement) |
| Step 1 | Human: Pull next week’s events from training calendar, OPORD, and any new taskings — Human Only |
| Step 2 | AI: Draft the schedule in standard weekly format with times, locations, and uniform requirements — AI generates, Human reviews |
| Step 3 | Human: Cross-reference against range bookings, vehicle requests, and instructor availability — Human Only |
| Step 4 | AI: Format conflicts as a decision matrix: “Event A conflicts with Event B at 0900. Options: move A to 1300, move B to Tuesday, or split the section.” — AI generates options, Human decides |
| Step 5 | Human: Make final decisions on conflicts, add section leader notes — Human Only |
| Step 6 | AI: Generate the final formatted schedule with all corrections applied, ready for distribution — AI generates, Human approves |
| Verification Checklist | All events have confirmed locations. All times are in 24-hour format. No double-bookings remain. Uniform for each event is specified. POC listed for each event. |
| Known Frontier Issues | AI sometimes invents room numbers that don’t exist on base. AI cannot verify range availability — must be checked manually. AI occasionally uses 12-hour time format even when told to use 24-hour. |
| Time Savings | Without AI: ~3 hours (gathering info, formatting, resolving conflicts manually). With AI: ~45 minutes (human gathers info, AI formats and generates options). |
| Junior Development | Rotate schedule duty among junior Marines weekly. Require the Marine to review AI output and brief back why each event is scheduled (builds planning judgment). Monthly: have one schedule created entirely without AI assistance to maintain baseline skill. |
Blank Template — Create Your Own
| Field | Content |
|---|---|
| Task | A specific, recurring task from your job |
| Frequency | How often you perform this task |
| Mode | Centaur or Cyborg |
| Steps | Step-by-step process with Human/AI labels for each step |
| Verification Checklist | What must be checked before output is final |
| Known Frontier Issues | Where AI has failed on this task before |
| Time Savings Estimate | Time without AI vs. time with AI |
| Junior Development Note | How this workflow preserves skill-building for junior personnel |
Completion Criteria: What a Finished Playbook Looks Like
A completed playbook entry should have: a clear task name, realistic frequency, the correct mode (Centaur or Cyborg), 4-8 concrete steps with Human/AI labels, a verification checklist with 3-5 items, at least one known frontier issue, and a specific time savings estimate. If your playbook has fewer than 4 steps or no verification checklist, it is not detailed enough.
Assessment Rubric
Use this rubric to evaluate student performance across all modules. Students should achieve "Meets" or higher in at least 5 of 6 categories to be considered Advanced Workshop graduates.
| Criteria | Exceeds Expectations | Meets Expectations | Developing |
|---|---|---|---|
| Frontier Map Completeness (Module 1) | Frontier map covers 5+ categories with specific examples of what AI handles, what it fails at, and what's changing. Map includes evidence from student's own testing. Moving frontier items include planned re-test dates. | Frontier map covers 3-4 categories with clear boundaries. Examples are specific to the student's domain. Distinguishes between "inside" and "outside" frontier accurately. | Frontier map is generic or vague. Categories are not specific to student's domain. Does not clearly identify frontier boundaries or relies on assumptions rather than testing. |
| Complex Build Quality (Module 2) | Successfully completed the Unit Readiness Dashboard with all three data sources integrated. Made conscious mode-switching decisions and articulated why each phase required that mode. Verified outputs at phase boundaries. Dashboard numbers are accurate when spot-checked against source data. | Completed most of the dashboard build. Made at least 2 mode switches with clear rationale. Attempted verification even if errors were found. Understands the difference between Centaur and Cyborg modes in practice. | Did not complete the build OR stayed in one mode throughout OR could not articulate why mode-switching matters. Did not verify outputs. Dashboard contains obvious errors not caught in QA. |
| Debugging Contribution (Module 3) | Actively diagnosed problems during group debugging. Asked clarifying questions that narrowed down root cause. Correctly identified whether failures were frontier issues, context issues, or platform quirks. Contributed patterns to the collective frontier map. | Participated in group debugging. Asked questions and proposed solutions. Could distinguish between AI limitations and prompting issues when guided by instructor. Documented at least one failure pattern. | Did not actively participate in debugging OR could not identify root causes OR attributed all problems to "AI limitations" without deeper analysis. Did not document patterns. |
| QA Protocol Rigor (Module 4) | Identified all 5 errors in the QA exercise in under 10 minutes. Explained why each error is dangerous. Applied the QA checklist to own work and found at least one issue. QA process is systematic and repeatable. | Identified 3-4 errors in the QA exercise. Understands the importance of verification. Can articulate what must be checked in AI-generated output for their domain. | Identified fewer than 3 errors OR took longer than 15 minutes OR did not apply QA thinking to own work. Treats QA as optional rather than critical. |
| Teaching Effectiveness (Module 5) | Delivered a clear, concise teach-back with a specific real-world example. Stayed within 3 minutes. Explained not just what the concept is, but why it matters and how to apply it. Received "Strong" ratings on all three rubric criteria from peers. | Delivered a teach-back that communicated the concept clearly. Used a real example. Stayed close to time limit. Received "Strong" on at least 2 of 3 rubric criteria from peers. | Teach-back was unclear, too long, or relied on generic examples. Could not explain how to apply the concept. Did not receive "Strong" on any rubric criteria from peers. |
| Workflow Playbook Completeness (Module 6) | Workflow playbook covers a real, recurring task with specific step-by-step details. Each step is labeled Human/AI with rationale. Verification checklist is thorough and testable. Known frontier issues are documented from actual experience. Time savings estimate is backed by real data. Junior development protocol is specific and actionable. | Workflow playbook covers a real task. Steps are clear. Verification checklist is present. Frontier issues are identified. Time savings estimate is reasonable. Junior development note is included. | Workflow playbook is incomplete or generic. Steps are vague. Verification checklist is missing or not specific. No documented frontier issues or time savings. No consideration of junior development. |
Using This Rubric
This rubric is not a checklist. Use it as a guide for observing student performance throughout the workshop. The most important indicator of success is whether students demonstrate conscious decision-making about when and how to use AI. A student who completes all deliverables but cannot articulate their reasoning has not achieved the learning objectives. Conversely, a student who struggles with technical execution but shows strong diagnostic thinking and mode-switching awareness is on the right track.
Certification Recommendation
Students who achieve "Meets" or higher in at least 5 of 6 categories are recommended for:
- Serving as Platform Training instructors
- Leading tool development projects in their units
- Mentoring junior personnel in AI-assisted workflows
- Contributing to frontier map updates and workflow documentation
Students who do not meet this threshold should be encouraged to continue practicing, revisit specific modules, and attempt the Advanced Workshop again after building 1-2 additional tools.