Overview & Learning Objectives
This workshop takes you beyond building individual tools and into the practices that make an entire section more capable. By the end of this course, you will be able to:
- Map the frontier for your specific domain
- Build verification protocols for AI-generated output
- Practice debugging complex systems
- Teach 201 skills to others
Before You Start
You should have at least one deployed tool before starting this course. The exercises assume hands-on experience building and maintaining an AI-assisted workflow or application. If you have not yet completed a build, start with Builder Orientation and Platform Training first.
Prerequisite Check: Are you ready for this course?
This course assumes you have:
- Completed AI Fluency Fundamentals, Builder Orientation, and Platform Training
- Built and deployed at least one tool that other people are using
- Encountered and resolved at least one AI failure case during a build
If you have not deployed a tool yet, complete Platform Training first. This course builds on that experience.
Module 1: Frontier Mapping for Your Domain
Duration: 45 minutes
The frontier is the boundary between what AI handles well and where it fails. Your job in this module is to create a detailed map of that boundary for your specific role and domain. This is the most important artifact you will produce in this course.
Example: Filled Frontier Map (1st Bn, 99th Marines)
Study this completed map to understand the level of specificity expected. Each row covers a category of work, and each column captures where AI sits today relative to that category.
| Category | Inside Frontier (AI Handles) | Outside Frontier (AI Fails) | Moving Frontier (Check Periodically) |
|---|---|---|---|
| Document generation | Correspondence drafts, counseling statement templates, award write-ups, standard memo formatting | Documents requiring specific institutional knowledge (unit SOPs, local policy interpretation), anything needing exact regulation quotes | Fitness report narratives (improving rapidly), legal review summaries |
| Data analysis | Trend identification in structured data, summarizing large datasets, creating charts/dashboards, anomaly flagging | Interpreting data in operational context (why readiness dropped), cross-referencing classified and unclassified sources | Predictive analysis (retention modeling, maintenance forecasting) |
| Process automation | Simple approval routing, email notifications, calendar scheduling, status tracking | Multi-system integrations with legacy databases, processes requiring human judgment calls (hardship determinations) | Complex conditional workflows (getting better with clear business rules) |
| Reference lookup | Finding relevant MCOs/NAVMCs, summarizing policy documents, comparing regulation versions | Interpreting how regulations apply to specific edge cases, resolving conflicting guidance between orders | Policy applicability questions (models improving but still unreliable for authoritative interpretation) |
| Training development | Lesson plan outlines, quiz/assessment generation, scenario creation, slide deck structure | Evaluating training effectiveness, adapting content for specific MOS requirements, determining doctrinal accuracy | Full lesson plan generation with appropriate examples (quality varies significantly) |
Why This Map Matters
This map is the most valuable artifact you will create. The BCG-Harvard study found that workers who applied AI beyond the frontier without knowing it performed 19 percentage points worse than those without AI. Knowing where the boundary is prevents you from trusting AI in places where it will fail silently.
Blank Frontier Map Template
Use this template to build your own frontier map. Replace the placeholder rows with the actual categories of work in your role.
Your Frontier Map
| Category | Inside Frontier | Outside Frontier | Moving Frontier |
|---|---|---|---|
| (your domain area 1) | | | |
| (your domain area 2) | | | |
| (your domain area 3) | | | |
| (your domain area 4) | | | |
| (your domain area 5) | | | |
Completion Criteria
Complete all 5 category rows. Each cell should contain a specific, concrete example from your domain — not generic descriptions. The “Moving Frontier” column should include tasks where AI capability is improving and you should re-evaluate quarterly.
Exercise: Fill In Your Frontier Map
Using the blank template above, fill in your frontier map for your specific role and domain. Be as specific as possible — generic entries like “writing” are less useful than “counseling statement drafts” or “award narratives.”
Self-Check: Is Your Map Complete?
Does every row have at least one entry in each column? If the “Outside Frontier” column is empty, you may be overestimating AI capabilities. Every domain has tasks where AI fails — identifying those tasks is the entire point of this exercise.
Module 2: Complex Build — Multi-Component System
Duration: 60 minutes
In this module you will build a system that requires switching between centaur and cyborg modes across multiple phases. The key skill is conscious, deliberate mode-switching — choosing the right work pattern for each component based on what that component demands.
Example: Tutoring Management System (1st Bn, 99th Marines)
This system tracks tutoring sessions for the Language and Culture Program. It has three components, each requiring a different work mode:
- Data backend — Centaur mode (accuracy-critical; schema design must be correct before anything else is built)
- Input interface — Cyborg mode (iterative, fluid collaboration with rapid back-and-forth)
- Automated reporting dashboard — Centaur mode (accuracy-critical; numbers must be right)
Phase 1: Data Backend (Centaur Mode)
Why centaur mode: The data model is the foundation. If the schema is wrong, every component built on top of it will inherit the error. You define the requirements precisely, AI generates the design, and you verify before moving on.
Prompt — Data Backend
Design a SharePoint data model for a tutoring management system. Tables needed: Students (name, rank, unit, language, proficiency level), Tutors (name, rank, languages, availability), Sessions (student, tutor, date, time, duration, topic, notes), and Reports (student, period, hours completed, progress notes). Define the relationships between tables and required columns.
Verification at the phase boundary:
- Does the output match your design requirements?
- Are the data types correct for each column?
- Do the relationships between tables make sense?
- Are there any missing columns you will need later?
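One way to make the schema check concrete is to write down the required tables and columns, then diff them against what was actually built. This is a hedged sketch in Python, not a SharePoint API call: table and column names are taken from the prompt above, and the comparison assumes you can hand-copy the built schema into the same shape.

```python
# Required tables/columns from the data-backend prompt (names come from the
# prompt text, not from a real SharePoint site).
REQUIRED_SCHEMA = {
    "Students": {"Name", "Rank", "Unit", "Language", "ProficiencyLevel"},
    "Tutors":   {"Name", "Rank", "Languages", "Availability"},
    "Sessions": {"Student", "Tutor", "Date", "Time", "Duration", "Topic", "Notes"},
    "Reports":  {"Student", "Period", "HoursCompleted", "ProgressNotes"},
}

def missing_columns(actual: dict) -> dict:
    """Return, per table, the required columns absent from the built schema."""
    gaps = {}
    for table, required in REQUIRED_SCHEMA.items():
        missing = required - actual.get(table, set())
        if missing:
            gaps[table] = missing
    return gaps
```

An empty result means no required column is missing. Note what this does not check: data types, relationships, and extra columns still need the manual verification steps above.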
Self-Check: Phase 1
Before moving to the next phase, verify: Does the output match your design? Are the data types correct? Do the relationships make sense? If you proceed with a flawed schema, you will have to redo the input interface and dashboard as well.
Phase 2: Input Interface (Cyborg Mode)
Why cyborg mode: The input form is user-facing and benefits from rapid iteration. You work alongside AI in a fluid back-and-forth, adjusting layout, field order, and validation rules as you go.
Prompt — Input Interface
Build a Power Apps form for logging tutoring sessions. It should: pull student and tutor names from the existing lists, auto-fill the date and time, have a dropdown for topic area, and a notes field. Make it mobile-friendly so tutors can log sessions from their phones.
Verification at the phase boundary:
- Does the form pull from the correct data sources?
- Do the dropdowns reflect the actual lists from Phase 1?
- Does the auto-fill logic work correctly?
- Is the layout usable on a mobile device?
Self-Check: Phase 2
Before moving to the next phase, verify: Does the output match your design? Are the data connections pointing to the right tables? Does the form write data in the format the reporting dashboard will need?
Phase 3: Automated Reporting Dashboard (Centaur Mode)
Why centaur mode: Dashboard numbers will be briefed to leadership. Incorrect metrics erode trust in the entire system. You specify exactly what metrics are needed, AI generates the dashboard, and you verify every number against source data.
Prompt — Dashboard
Create a Power BI dashboard that shows: total tutoring hours this month by student, tutor utilization rates, most common topic areas, and students who haven't attended sessions in 2+ weeks. Include a filter for date range and language.
Verification at the phase boundary:
- Do the totals match a manual count of the source data?
- Are the utilization rate calculations correct?
- Does the “2+ weeks absent” filter use the right date logic?
- Do filters work as expected?
Self-Check: Phase 3
Before calling this system complete, verify: Does the output match your design? Are the data types correct? Do the relationships make sense? Spot-check at least three data points manually against the source lists.
Final Verification: Is your system working end-to-end?
Before moving to Module 3, test the full workflow:
- Submit a test tutoring session through the input form
- Verify it appears in the data backend with correct fields
- Check the dashboard — do the totals update correctly?
- Test all filters and date ranges on the dashboard
- Verify student hours match between the input data and the report
If anything fails, return to the phase where that component was built and debug before proceeding.
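The last cross-check in the list, matching student hours between the input data and the report, can be sketched as a small script. This is an illustration under assumptions: it presumes you can export the session log and the report totals (for example, to CSV), and the field names below are hypothetical, not Power Platform fields.

```python
from collections import defaultdict

def hours_by_student(sessions):
    """Recompute per-student hours from the raw session log."""
    totals = defaultdict(float)
    for s in sessions:
        totals[s["student"]] += s["duration_hours"]
    return dict(totals)

def report_mismatches(sessions, report_totals):
    """Students whose reported hours differ from the recomputed hours."""
    computed = hours_by_student(sessions)
    names = set(computed) | set(report_totals)
    return {n for n in names if computed.get(n, 0.0) != report_totals.get(n, 0.0)}
```

An empty mismatch set means the input data and the report agree for every student; any name it returns tells you exactly which records to trace through the backend.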
Module 3: Verification Protocols and QA
Duration: 45 minutes
Every AI-generated output needs a verification step. This module gives you a structured QA checklist and a timed exercise to practice using it.
QA Checklist
- Source verification — AI fabricates references. Every citation, regulation number, and URL must be independently verified.
- Data accuracy — Numbers, dates, names, and quantities must be checked against source data.
- Logic check — Does the reasoning hold? Are conclusions supported by the premises?
- Format compliance — Does the output match required formats, templates, and standards?
- Domain review — Does this pass the smell test for someone who knows this domain?
Exercise: Timed QA Review
The following document was generated by AI and contains five planted errors.
AI-Generated SOP Excerpt — QA Timed Exercise
Instructions: Time yourself. Use the QA checklist above to find all five planted issues in this document.
STANDARD OPERATING PROCEDURE
Marine Corps Detachment, 99th Training Group
Subject: Unit Check-In / Check-Out Procedure
Reference: (a) MCO 1000.6B, Individual Records Administration
(b) NAVMC 11800/4 (Rev 03-2025), Check-In/Check-Out Sheet
1. Purpose. To establish standardized procedures for all personnel checking in to and checking out of Marine Corps Detachment, 99th Training Group. All personnel shall complete check-in within 72 hours of reporting aboard.
2. Scope. This SOP applies to all Marines, Sailors, and civilian personnel assigned to or transferring from the Detachment.
3. Procedure — Check-In. Personnel reporting aboard shall complete the following steps in order:
Step 1: Report to the Officer of the Day (OOD) with original orders and ten
copies of PCS orders.
Step 2: Obtain a check-in sheet per reference (b).
Step 3: Receive unit orientation brief from S-1 covering unit organization,
key personnel, and local policies.
Step 4: Report to assigned section SNCOIC/OIC for introduction and initial
task assignment.
5. Report to S-1 for initial in-processing, including service record book
review and page 11 entry.
6. Complete remaining check-in sheet signatures (S-3, S-4, Medical, Dental,
IPAC) within 48 hours of reporting.
4. Procedure — Check-Out. Personnel transferring from the unit shall initiate check-out procedures no later than 10 working days prior to the date of detachment.
Answer Key — Five Planted Errors
- Fabricated Reference #1: “MCO 1000.6B” is cited as the governing order for check-in procedures. This MCO does not exist. AI-generated regulation numbers must always be independently verified against the official Marine Corps Publications System.
- Fabricated Reference #2: “NAVMC 11800/4 (Rev 03-2025)” is cited as the check-in/check-out form. This form number is fabricated. AI frequently generates plausible-sounding form numbers that do not correspond to real NAVMC forms.
- Data Accuracy Error — Contradictory Timelines: Paragraph 1 states check-in must be completed “within 72 hours of reporting aboard,” but Step 6 states remaining signatures must be completed “within 48 hours of reporting.” These timelines contradict each other. AI often introduces subtle inconsistencies between sections of longer documents.
- Logic Error — Steps Out of Order: Step 3 has the Marine receiving a “unit orientation brief from S-1,” but Step 5 has the Marine reporting “to S-1 for initial in-processing.” Logically, you would in-process at S-1 (Step 5) before receiving the orientation brief (Step 3). The S-1 steps are reversed.
- Format Error — Inconsistent Numbering: Steps 1 through 4 use the “Step 1:” format, but the procedure then switches to a bare “5.” and “6.” format midway through. AI frequently loses formatting consistency in longer documents, especially when generating numbered procedures.
Self-Check: How Long Did It Take?
How long did it take you to find all five errors? Now compare that to how long it would take to write this SOP from scratch. The difference is your time savings — and it demonstrates why the QA step is where value is created. AI generates the draft in seconds; your expertise catches what it gets wrong.
What This Exercise Teaches
Which error did you find first? Most people catch the formatting error (inconsistent numbering) quickly but miss the fabricated references unless they explicitly verify every citation. This is why Source Verification is item #1 on the QA checklist — it is the easiest step to skip and the most dangerous when you do.
Module 4: Debugging Practice
Duration: 45 minutes
Debugging is a core skill for anyone building with AI. In this module you will work through three scenarios individually, diagnosing root causes and identifying fixes. Each scenario represents a common failure pattern in AI-assisted builds.
Scenario 1: Approval Flow Sends to the Wrong Person
A Power Automate flow is supposed to route purchase requests based on dollar amount. Requests under $2,500 go to the Department Head for approval. Requests $2,500 and above go to the Commanding Officer.
Symptom
- Request for $1,500 office supplies, submitted by Cpl Torres. Expected: routed to Department Head (Maj Williams). Actual: routed to CO (Col Richardson). [Wrong]
- Request for $2,500 equipment purchase, submitted by SSgt Park. Expected: routed to CO (Col Richardson). Actual: routed to CO (Col Richardson). [Correct]

All requests at exactly $2,500 are handled correctly. Only requests BELOW $2,500 are being sent to the wrong approver.
Try to diagnose this yourself first
Think about what could cause this symptom. What would you check? Write down your hypothesis before reading the diagnosis guide below.
Diagnosis guide:
- What is the condition that controls routing?
- If requests at exactly $2,500 go to the CO correctly, but requests below $2,500 also go to the CO, what comparison operator would cause this?
- What should the condition be instead?
Self-Check: Root Cause and Fix
Root cause: The comparison direction is reversed. The condition likely reads "if amount is less than or equal to 2500, route to CO" (or, equivalently, the Department Head branch checks "greater than 2500" instead of "less than 2500"). Under that rule, every request below the threshold falls into the CO branch, while a request at exactly $2,500 happens to route correctly, which masks the bug.
Fix: Change the condition to: “If amount is greater than or equal to 2500, route to CO. Otherwise, route to Department Head.” Verify by testing with values at $2,499, $2,500, and $2,501.
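The intended rule and the suspected buggy rule can each be written in one line, which makes the boundary tests from the fix easy to run. A hedged sketch in Python (the real condition lives in a Power Automate flow; this only models the comparison):

```python
CO_THRESHOLD = 2500

def route(amount):
    # Intended rule: at or above the threshold goes to the CO,
    # everything below goes to the Department Head.
    return "CO" if amount >= CO_THRESHOLD else "Department Head"

def buggy_route(amount):
    # Suspected bug: comparison direction reversed. At exactly $2,500
    # this happens to give the right answer, masking the error.
    return "Department Head" if amount > CO_THRESHOLD else "CO"
```

Testing at $2,499, $2,500, and $2,501 exposes the inversion: the buggy version agrees with the intended one only at the $2,500 boundary.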
Scenario 2: Gallery View Shows All Items Instead of Filtered
A Power App has a gallery that should display only overdue items — items whose due date has passed and whose status is not “Complete.”
Symptom
The "Overdue Items" gallery shows ALL items from the task list, including items marked "Complete" and items whose due date is in the future.

- Total items in list: 47
- Items actually overdue: 12
- Items displayed in gallery: 47
Try to diagnose this yourself first
Think about what could cause this symptom. What would you check? Write down your hypothesis before reading the diagnosis guide below.
Diagnosis guide:
- What filter formula is the gallery using? Check the Items property of the gallery.
- Is the filter comparing dates correctly? (Common issue: comparing a date value to a text string.)
- Is the status check using the correct column name and value? (Common issue: column display name vs. internal name.)
Self-Check: Root Cause and Fix
Root cause: The filter formula is likely malformed or not applied at all.
Common causes include: (1) the gallery Items property points to the raw data
source without a Filter() function, (2) the date comparison uses a text
string like "Today" instead of the Today() function, or (3) the
status column is referenced by its display name (e.g., “Status”) when Power
Apps requires the internal name (e.g., “OData_Status”).
Fix: Set the gallery Items property to:
Filter(TaskList, DueDate < Today() && Status.Value <> "Complete").
Verify by checking the displayed count against a manual count of overdue, incomplete items.
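The Power Fx filter in the fix has a direct analogue you can reason about line by line. A minimal Python sketch of the same predicate (field names are assumptions; the real formula runs in Power Apps):

```python
from datetime import date

def overdue_items(items, today):
    # Mirrors Filter(TaskList, DueDate < Today() && Status.Value <> "Complete"):
    # keep only items whose due date has passed AND whose status is not "Complete".
    return [i for i in items if i["due_date"] < today and i["status"] != "Complete"]
```

The failure mode maps directly: if the gallery points at the raw source with no filter at all, you get the "47 of 47 shown" symptom, which is why inspecting the Items property is the first diagnostic step.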
Scenario 3: Dashboard Numbers Don’t Match Source Data
A Power BI dashboard reports completion percentage for a training tracker. Leadership questions the numbers because they do not match a manual count.
Symptom
- Dashboard shows: 85% training completion rate
- Manual count shows: 67% training completion rate
- Source data: 47 Marines, 12 training events each = 564 total slots

The dashboard appears to be counting something differently than "completed slots / total slots."
Try to diagnose this yourself first
Think about what could cause this symptom. What would you check? Write down your hypothesis before reading the diagnosis guide below.
Diagnosis guide:
- How is the dashboard calculating “completion rate”? Is it counting rows vs. distinct values?
- If a Marine completed 10 of 12 events, does the dashboard count that Marine as “85% complete” or as “1 complete Marine”?
- Are there duplicate rows in the source data inflating the count?
Self-Check: Root Cause and Fix
Root cause: The counting methodology differs between the dashboard and the manual count. The most likely cause is that the dashboard is counting rows (each training completion record) rather than distinct values (unique Marine-event combinations). If some Marines have duplicate completion records (e.g., re-certifications, data entry errors), the row count is inflated. Alternatively, the dashboard may be calculating the average completion rate per Marine (averaging individual percentages) rather than the overall rate (total completed slots divided by total slots), which produces a different number.
Fix: Align the calculation method. Use
DISTINCTCOUNT instead of COUNT to avoid duplicates. Ensure the
formula is: total completed distinct slots / total expected slots. Verify by manually
calculating the rate for a small subset (e.g., one platoon) and comparing.
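The two counting methods named in the diagnosis produce different numbers from the same records. A sketch with illustrative field names (the real calculation would be DAX in Power BI, where the analogous distinction is DISTINCTCOUNT vs. COUNT):

```python
def overall_rate(records, n_marines, events_each):
    # Correct: distinct completed (marine, event) pairs over total slots.
    distinct = {(r["marine"], r["event"]) for r in records}
    return len(distinct) / (n_marines * events_each)

def row_count_rate(records, n_marines, events_each):
    # Buggy: raw row count. Duplicate completion records inflate the rate.
    return len(records) / (n_marines * events_each)
```

With even one duplicate record, the row-count version reports a higher rate than the distinct-count version, which is exactly the dashboard-vs-manual-count discrepancy in the symptom.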
Key Insight
Every debugging session teaches you something about the frontier. When you find a bug, ask: “Is this a frontier issue (AI cannot do this reliably) or a context issue (I did not give AI enough information)?” Document the answer. Over time, your frontier map becomes more precise and your prompts become more effective.
Module 5: Teaching Others — The 201 Multiplier
Duration: 30 minutes
Individual AI capability is valuable. Organizational AI capability is transformational. This module covers how to spread 201 skills across your section — and how to do it responsibly.
The Permission Gap
Mollick’s research shows workers are already using AI but hiding it. They are worried about organizational reaction — will leadership see it as cheating? Will they get in trouble? This creates a shadow AI culture where best practices are not shared, mistakes are repeated, and no one benefits from anyone else’s learning.
The fix is explicit permission and structure. When leadership endorses AI use through a framework like EDD, people stop hiding and start sharing.
The Apprentice Problem
Entry-level job postings dropped 35% from 2023 to 2025. AI is automating the routine tasks that juniors traditionally learned on. If a junior Marine never manually writes a counseling statement because AI generates it, how do they develop the judgment to know when the AI-generated version is wrong?
This is not a reason to stop using AI. It is a reason to be deliberate about how juniors interact with it.
Protocol for Junior Marines Using AI
- Require review and explanation — Juniors must review AI output and explain WHY it is correct or incorrect. The explanation builds the judgment that unreviewed use would skip.
- Periodically work without AI — Key tasks should periodically be done from scratch to build foundational skills. A Marine who has never written a memo manually cannot evaluate an AI-generated memo.
- Use AI output as a teaching tool — Give juniors AI-generated products and have them find the problems. This builds critical evaluation skills faster than starting from blank.
- Rotate QA review — Assign juniors to the QA review step so they develop quality judgment through repeated exposure to both good and flawed output.
Exercise: The 60-Second Teach
Pick one 201 skill you have used in your own work. Write a 60-second explanation that someone new to AI could follow. Use your own real example — not a hypothetical.
Working alone? Record your 60-second teach as a voice memo or short video. Teaching to a camera forces you to clarify your thinking the same way teaching a person does. If you can explain a 201 skill clearly in 60 seconds using your own example, you understand it.
60-Second Teach Completion Criteria
Each field should be 1–3 sentences. The “Common Mistake” field is the most valuable — think about what you got wrong when you first learned this skill, or what you have seen others get wrong.
60-Second Teach Template
| Field | Your Content |
|---|---|
| Skill | (which 201 skill are you teaching?) |
| Your Example | (a real task where you used this skill) |
| Key Insight | (the one thing that made the difference) |
| Common Mistake | (what goes wrong when people skip this skill) |
Deliverable: Workflow Playbook
Your final deliverable is a one-page playbook for one AI-integrated workflow from your actual job. Study the completed example below, then create your own using the blank template.
Start simple. Your first playbook does not need to be this detailed. Begin with: Task, Mode, 3–5 steps with Human/AI labels, and one known frontier issue. Add detail as you use the playbook over time and discover what matters.
Completed Example: Weekly Training Schedule Publication
| Field | Content |
|---|---|
| Task | Weekly training schedule publication for the section |
| Frequency | Weekly — every Thursday by 1600 |
| Mode | Cyborg (continuous back-and-forth refinement) |
| Step 1 | Human: Pull next week’s events from training calendar, OPORD, and any new taskings — Human Only |
| Step 2 | AI: Draft the schedule in standard weekly format with times, locations, and uniform requirements — AI generates, Human reviews |
| Step 3 | Human: Cross-reference against range bookings, vehicle requests, and instructor availability — Human Only |
| Step 4 | AI: Format conflicts as a decision matrix: “Event A conflicts with Event B at 0900. Options: move A to 1300, move B to Tuesday, or split the section.” — AI generates options, Human decides |
| Step 5 | Human: Make final decisions on conflicts, add section leader notes — Human Only |
| Step 6 | AI: Generate the final formatted schedule with all corrections applied, ready for distribution — AI generates, Human approves |
| Verification Checklist | All events have confirmed locations. All times are in 24-hour format. No double-bookings remain. Uniform for each event is specified. POC listed for each event. |
| Known Frontier Issues | AI sometimes invents room numbers that don’t exist on base. AI cannot verify range availability — must be checked manually. AI occasionally uses 12-hour time format even when told to use 24-hour. |
| Time Savings | Without AI: ~3 hours (gathering info, formatting, resolving conflicts manually). With AI: ~45 minutes (human gathers info, AI formats and generates options). |
| Junior Development Note | Rotate schedule duty among junior Marines weekly. Require the Marine to review AI output and brief back why each event is scheduled (builds planning judgment). Monthly: have one schedule created entirely without AI assistance to maintain baseline skill. |
Workflow Playbook Completion Criteria
A complete playbook entry includes: 4–8 concrete steps with Human/AI labels, a verification checklist with 3–5 specific items to check, at least one known frontier issue, and a realistic time savings estimate. Compare your entry against the filled example above.
Blank Workflow Playbook Template
Your Workflow Playbook
| Field | Content |
|---|---|
| Task | |
| Frequency | |
| Mode | Centaur or Cyborg |
| Steps | Step-by-step with Human/AI labels |
| Verification Checklist | |
| Known Frontier Issues | |
| Time Savings | Without AI vs. with AI |
| Junior Development Note | |
Assignment
- Complete your frontier map (Module 1) with at least five domain categories
- Create your workflow playbook for one recurring task from your actual job
- Teach one 201 skill to a colleague this week using your 60-second teach
Timeline: Budget 4–6 hours outside of this course to complete these deliverables. Aim to finish within one week while the material is fresh.
Capstone Deliverable
Documentation Package
Complete the full documentation package (User Guide, Replication Guide, Adaptation Guide, Maintenance Guide) for your tool. The package must be thorough enough that another developer could rebuild your tool from the Replication Guide alone.
Knowledge Check
The BCG-Harvard study found that workers who applied AI beyond the frontier without knowing it performed how much worse than those without AI?
The BCG-Harvard study found a 19 percentage point performance drop when workers applied AI beyond the frontier unknowingly. This is why frontier mapping is the most important artifact in this course -- knowing where the boundary is prevents you from trusting AI in places where it will fail silently.
Your frontier map has an "Outside Frontier" column that is completely empty for every category. What does this indicate?
Every domain has tasks where AI fails. An empty "Outside Frontier" column indicates overestimation of AI capabilities rather than a domain that AI handles perfectly. Identifying those failure points is the entire purpose of the frontier mapping exercise. If you cannot find failures, you may not have tested AI on enough tasks in your domain.
What is the purpose of the "Moving Frontier" column in a frontier map?
The "Moving Frontier" column captures tasks where AI capability is actively improving. For example, fitness report narratives or predictive analysis might be unreliable today but improving rapidly. These items should be re-evaluated quarterly so you can take advantage of new capabilities as they become reliable enough for production use.
In the Tutoring Management System example, the data backend uses centaur mode while the input interface uses cyborg mode. What principle drives this mode selection?
Mode selection is driven by what each component demands, not by a fixed rule about component type. The data backend uses centaur mode because the schema is a foundation -- errors cascade to everything built on top. The input interface uses cyborg mode because it is user-facing and benefits from rapid iteration. The reporting dashboard returns to centaur mode because the numbers must be accurate for leadership briefs.
You completed Phase 1 (data backend) and Phase 2 (input interface) of a multi-component system. In Phase 3 (reporting dashboard), you discover the dashboard totals do not match the input data. What should you do first?
In a multi-component system, discrepancies can originate at any layer. The correct approach is to trace the data flow from input through storage to display to find exactly where the numbers diverge. The error might be in the dashboard calculation, but it could also be in how the input form writes data or how the data backend stores it. Systematic diagnosis prevents you from fixing the wrong component.
Why does the course recommend verifying the data backend schema before building the input interface or dashboard?
The data model is the foundation. If the schema has incorrect column types, missing relationships, or wrong data types, every component built on top of it will inherit those errors. Fixing a schema after building the input form and dashboard means rebuilding those components too. Verifying the foundation before building upward prevents cascading rework.
The QA checklist lists "Source verification" as item #1. According to the timed QA exercise, why is this the most important step?
Source verification is item #1 because it is the easiest step to skip and the most dangerous when you do. AI generates plausible-sounding regulation numbers, form numbers, and citations that do not correspond to real documents. Most people catch formatting errors quickly but miss fabricated references unless they explicitly verify every citation against authoritative sources.
In the timed QA exercise, the AI-generated SOP contains steps that are out of logical order (receiving an S-1 orientation brief before in-processing at S-1). What type of QA check catches this error?
This is a logic check error. The question is whether the reasoning and sequence make sense: logically, you would in-process at S-1 before receiving an orientation brief from S-1. Domain knowledge tells you the steps are reversed. Format compliance would catch the inconsistent numbering (a separate error in the same document), but the logical ordering issue requires understanding the actual workflow.
The timed QA exercise demonstrates that AI can generate a draft SOP in seconds, but finding the five errors takes several minutes of careful review. What does this teach about the value proposition of AI-assisted work?
The exercise demonstrates the core value proposition: AI handles the generation (fast, repetitive work) while humans handle the verification (expert judgment). Even with QA time included, the total time is less than writing from scratch. But the QA step is where the actual value is created -- it is what separates a useful draft from a document with fabricated references and logical errors.
In Debugging Scenario 1, a $1,500 request routes to the CO instead of the Department Head. The key diagnostic question is: "If requests at exactly $2,500 go to the CO correctly, but requests below $2,500 also go to the CO, what comparison operator would cause this?" What is the root cause?
The root cause is a reversed comparison in the condition logic. For example, if the CO branch checks "less than or equal to 2500" instead of "greater than or equal to 2500," every request below the threshold routes to the CO, while a request at exactly $2,500 happens to route correctly and masks the bug. The fix is to ensure the condition routes amounts below $2,500 to the Department Head and amounts at or above $2,500 to the CO, then verify with test values at $2,499, $2,500, and $2,501.
In Debugging Scenario 2, a gallery view that should show only overdue items displays all 47 items instead of the expected 12. What is the most systematic first step in diagnosing this issue?
The systematic first step is to inspect the gallery's Items property. If the gallery shows all items, the most likely cause is that the filter is missing, malformed, or referencing incorrect column names. Common AI errors include: pointing to the raw data source without a Filter() function, using a text string "Today" instead of the Today() function, or using a column's display name when Power Apps requires the internal name.
After resolving a debugging scenario, the course suggests asking: "Is this a frontier issue or a context issue?" What is the practical difference between the two?
The distinction is critical for how you respond. A context issue means you can fix the problem by providing better information in your prompt -- the AI is capable, it just lacked the right input. A frontier issue means AI cannot reliably handle this task regardless of how you prompt it, so you need to plan for manual work or extra verification. Documenting which category each bug falls into makes your frontier map more precise and your future prompts more effective.
Mollick's research shows workers are already using AI but hiding it. According to the course, what is the organizational fix for this "shadow AI culture"?
The fix is explicit permission and structure. When leadership endorses AI use through a framework like EDD, people stop hiding and start sharing. This eliminates shadow AI culture where best practices are not shared, mistakes are repeated, and no one benefits from anyone else's learning. Structure transforms individual capability into organizational capability.
The course describes the "apprentice problem": entry-level job postings dropped 35% from 2023 to 2025 as AI automates routine tasks juniors traditionally learned on. Which protocol addresses this?
The protocol is deliberate about how juniors interact with AI: require them to review and explain why AI output is correct or incorrect (builds judgment), periodically complete key tasks without AI (builds foundational skills), use AI output as a teaching tool for finding problems (builds critical evaluation), and rotate QA review duty (builds quality judgment through repeated exposure). The goal is not to restrict AI use but to ensure juniors develop the expertise needed to evaluate AI output.
A complete workflow playbook entry requires several components. Which component is the most critical for ensuring the playbook remains useful over time?
While all components matter, known frontier issues are the most critical for long-term usefulness. They encode hard-won experience about where AI fails for this specific workflow, such as "AI sometimes invents room numbers that don't exist on base" or "AI cannot verify range availability." Without this information, future users will repeat the same mistakes. The frontier issues turn individual debugging experience into organizational knowledge.
Course Completion Checklist
Course Complete!
You have completed Advanced Workshop.