Agenda
| Time | Module | 201 Skills Applied |
|---|---|---|
| 0:00–0:15 | Setup and Review | All six (quick check) |
| 0:15–0:35 | Whiteboard Planning (Centaur Phase 1) | Task Decomposition, Context Assembly |
| 0:35–1:20 | Build #1: Leave Router (Centaur Phase 2-4) | Context Assembly, Quality Judgment |
| 1:20–1:30 | Break — Failure Sharing | Frontier Recognition |
| 1:30–2:15 | Build #2: Training Tracker (Cyborg Mode) | Iterative Refinement, Workflow Integration |
| 2:15–2:25 | Break | |
| 2:25–3:00 | Build #3: Your Problem (Choose Your Mode) | Task Decomposition, all six |
| 3:00–3:20 | Frontier Map Update and Wrap | Frontier Recognition |
Detailed Timing Breakdown
| Module | Duration | Key Activities |
|---|---|---|
| Module 1: Review & Setup | 15 min | M365 access verification, centaur/cyborg review, failure story sharing |
| Module 2: Whiteboard Exercise | 20 min | Leave router business rules, data flow, edge cases — human design phase |
| Module 3: Build #1 — Leave Router | 45 min | AI build, human verify, AI refine, human accept — full centaur cycle |
| Break / Failure Sharing | 10 min | Document what AI got wrong, how students caught it, frontier map start |
| Module 4: Build #2 — Training Tracker | 45 min | Cyborg mode: continuous iteration, discover requirements as you build |
| Module 5: Build #3 — Your Problem | 35 min | Student chooses mode, applies to real problem, independent work |
| Module 6: Frontier Map & Wrap-Up | 20 min | Compile failures into frontier map, assignment briefing |
| Buffer | 10 min | Absorb overruns, handle access issues, extra questions |
| Total Course Time | 3 hours 20 min | 190 min instruction + 10 min buffer — schedule a 4-hour block to absorb overruns |
Contingency Plan: If Power Platform Is Unavailable
If M365 access or Power Platform is down or unavailable for any reason, pivot to the following backup plan:
- Use AI chat to generate mockup screenshots: Have students describe their desired workflow to an AI tool like ChatGPT or Gemini (both also on GenAI.mil) and ask it to generate mockup images of the interface. This preserves the prompt engineering practice.
- Paper wireframe exercise: Complete the whiteboard planning phases for all three builds on paper or whiteboard. Focus on business logic, data flow, and edge cases. This preserves the centaur/cyborg conceptual distinction.
- Use pre-built examples for demonstration: If you have screenshots or recordings of completed Power Platform builds from previous sessions, walk through them as case studies. Focus on verification checkpoints and failure recognition.
- Convert to prompt engineering workshop: Have students write the complete prompt sequence they would use to build each tool, then peer review each other's prompts. This maintains the EDD skill development focus.
The core learning outcome is not the Power Platform tool itself, but the centaur/cyborg work patterns and verification discipline. Those can be taught without live platform access if necessary.
Module 1: Setup and Review
Duration: 15 minutes
Pre-Flight Access Check
Before starting Module 2, confirm access to all four platforms and two critical capabilities. Do not proceed without all six — resolving access issues during a build wastes significant time.
- SharePoint Online (*.sharepoint.us) — can create a list
- Power Apps (make.gov.powerapps.us) — can open the maker portal
- Power Automate (high.flow.microsoft.us) — can see "My flows"
- GenAI.mil — can open Agent Designer or start a conversation
- Integrate button: Open any SharePoint list → look for Integrate in the command bar. If you see "Power Apps" and "Power Automate" options, you're good. If the Integrate button is missing, your tenant admin has disabled it — see fallback below.
- Approvals connector: In Power Automate, click + New flow → search connectors for "Approvals." If it appears, you're good. If not, the approval actions in Build #1 — Step 4 and the Step 5-6 extensions that depend on it — will not work. Skip them and complete the core routing build (Steps 1-3) only.
Fallback: If Integrate Button Is Unavailable
If the Integrate button is disabled in your tenant, create apps manually instead: Power Apps → Create → Canvas app from blank → add your SharePoint list as a data connection → insert an EditForm control → connect it to your list. This replaces the auto-generated app. You will need to build the browse screen and detail screen yourself. Adjust the prompts accordingly: instead of "I used Integrate," say "I created a canvas app from blank and added an EditForm connected to my SharePoint list."
Verify M365 and Power Platform access for all participants. Run a quick centaur-vs-cyborg review to confirm everyone remembers the two modes from Builder Orientation.
Ask 2–3 students to share failure cases from their Builder Orientation assignment. What went wrong? What did the AI get wrong that they had to catch? These stories set the tone for the rest of the session.
Instructor Note
Have everyone log in to each platform before proceeding. Verify access now so you do not lose time during the builds. If anyone has access issues, have IT support contact information ready. Allow 5 extra minutes if needed.
AI Tool Selection for Platform Builds
During hands-on builds, students will use an AI chat tool to generate Power Apps formulas and Power Automate flow definitions. If the classroom has CAC-enabled workstations, use GenAI.mil or CamoGPT (Army-managed) so students can reference CUI-level unit data in their prompts (anonymize PII unless a PIA authorizes it). Fall back to commercial tools for unclassified exercises. See the Approved Tools page for the full list.
Note: Agent Designer (persistent agent configuration) is only available on GenAI.mil. If using CamoGPT, paste the quick-start primer from the Agent Setup Guide at the beginning of each session.
Power Apps Tip: Copy Code from Tree View
To get the current design code out of Power Apps and into your AI conversation: open the Tree View panel, right-click any control or screen, and select Copy. This copies the full component definition. Paste it directly into GenAI.mil or your AI tool — now the AI can see exactly what you have and give you precise modifications instead of generic formulas.
If Your GenAI.mil Session Resets
Your agent configuration (from Agent Designer) is saved permanently — you do not need to re-paste it. But your conversation context resets. Start a new conversation and re-establish context:
"I'm building a Leave Request system. I have a SharePoint list called LeaveRequests with columns: [list them]. I have an auto-generated Power App from Integrate. I need to [describe where you left off]."
You can also paste your current formula from Power Apps (Tree View → right-click → Copy) to give the agent your exact state.
Module 2: Build #1 — Structured Workflow, Centaur Mode
Duration: 65 minutes (20 min whiteboard planning + 45 min build)
Problem: A request routing workflow — a requester submits a request, it goes to the right approver, the approver acts, and the requester is notified of the decision.
Explicit Centaur Pattern
This build follows the centaur pattern: distinct human phases and AI phases with clear boundaries between them.
Phase 1 — Human Designs
Define the approval chain, business rules, and notifications before touching AI. No AI yet. Who approves what? What are the thresholds? What happens when an approver is unavailable? Write it all down first — use the planning template below as your guide.
Filled-In Whiteboard Planning Template — Request Routing Workflow
Business Rules
- Requests under $500: Section Leader approves
- Requests $500–$2000: Department Head approves
- Requests over $2000: CO approves
- All requests require justification memo
- Approver has 48 hours to act or request auto-escalates
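These thresholds are easy to get wrong at the boundaries (does $500 exactly go to the Section Leader or the Department Head?). A quick Python sketch — illustrative only, not part of the Power Platform build, and assuming "under $500" is exclusive — lets students check the boundary logic on paper before prompting the AI:

```python
def pick_approver(amount: float) -> str:
    """Map a request amount to its approver per the business rules above.

    Boundary reading (an assumption worth confirming in class):
    "under $500" is exclusive, so exactly $500 goes to the Department Head.
    """
    if amount < 500:
        return "Section Leader"
    if amount <= 2000:
        return "Department Head"
    return "CO"

# Boundary cases are where students most often get the logic wrong.
assert pick_approver(499.99) == "Section Leader"
assert pick_approver(500) == "Department Head"
assert pick_approver(2000) == "Department Head"
assert pick_approver(2000.01) == "CO"
```

Walking through these four boundary cases on the whiteboard takes a minute and prevents the most common routing bug in Phase 2.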
Data Flow
Requester submits via Power App form → SharePoint list → Notification to approver → Approver acts → Requester notified of decision
Notifications
- Requester: confirmation of submission, approval/denial notification
- Approver: new request alert, 24-hour reminder if not acted on
- Admin: weekly summary of all requests
Edge Cases
- Approver on leave: auto-route to alternate
- Request withdrawn: requester can cancel before approval
- Duplicate submission: system checks for matching requests within 24 hours
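The duplicate-submission edge case can also be sketched in plain Python during planning. The function name and the "same requester within 24 hours" rule below are illustrative assumptions — a real unit might also compare leave dates or request type:

```python
from datetime import datetime, timedelta

def is_duplicate(new_submitted: datetime, new_requester: str,
                 existing: list[tuple[str, datetime]]) -> bool:
    """Flag a request as a duplicate if the same requester submitted
    another request within the previous 24 hours (illustrative rule)."""
    window = timedelta(hours=24)
    return any(
        requester == new_requester and abs(new_submitted - submitted) <= window
        for requester, submitted in existing
    )

existing = [("Martinez", datetime(2026, 3, 16, 9, 0))]
# Same requester, six hours later: flagged as a duplicate.
assert is_duplicate(datetime(2026, 3, 16, 15, 0), "Martinez", existing)
# Same requester, 48 hours later: allowed.
assert not is_duplicate(datetime(2026, 3, 18, 9, 0), "Martinez", existing)
```

Deciding this rule precisely in Phase 1 is what makes it promptable in Phase 2.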
Phase 2 — AI Builds
Open GenAI.mil. Feed it your planning template: “I need a SharePoint list to store leave requests. Here are the business rules: [type your rules from Phase 1]. Give me a CSV template I can import.” Import the CSV into SharePoint, then click Integrate → Power Apps → Create an app to auto-generate your starting point. Go back to GenAI.mil: “I have the auto-generated app. Now customize the form to validate inputs and route approvals based on these rules: [type your approval chain from Phase 1].” Paste the AI’s formulas into Power Apps.
GenAI.mil and CamoGPT for Code Generation
When generating Power Apps formulas and Power Automate expressions, GenAI.mil and CamoGPT (Army-managed) produce the same quality of code output as commercial tools. Use them when your prompts include CUI-level unit data such as unit names and approval chains. Anonymize PII (real Marine names, SSNs) unless a PIA authorizes it. See the Approved Tools page.
Complete Build Script: Leave Request Router (Centaur Mode)
The following is the complete step-by-step build sequence. Each prompt block represents what the student submits to the AI. Each code block shows the expected output or result. Instructor callouts indicate teaching moments and verification checkpoints.
When AI Output Doesn't Match
The expected outputs below show what a well-configured agent should produce. Your AI may use different variable names, different ordering, or include extra explanation. This is normal. Check for three things:
- Correct syntax patterns — `.Value` on Choice columns, `{Value: "..."}` for Patch, `[Dynamic Content: ...]` in flows
- Correct function names — Power Fx (not JavaScript), WDL (Workflow Definition Language) expressions in flows (not Power Fx)
- Delegation warnings on data formulas (the AI's suggestion should not introduce any)
If those three are right, the output is usable even if it doesn't match word-for-word. If the AI produces JavaScript syntax (forEach, =>) or non-existent functions, re-prompt: "Use Power Fx only, not JavaScript."
Step 1: SharePoint List Setup
I need to create a SharePoint list called "Leave Requests" to track leave requests for 1st Battalion, 99th Marines. The list needs the following columns:
- Requester Name (person field)
- Leave Type (choice: Annual, Convalescent, Emergency, Special Liberty)
- Start Date (date)
- End Date (date)
- Days Requested (calculated: End Date minus Start Date)
- Justification (multiple lines of text)
- Status (choice: Pending, Approved, Denied, Cancelled)
- Approver (person field)
- Date Submitted (date and time, default to today)
- Date Actioned (date and time)
Give me a CSV template I can import into SharePoint to create this list, plus tell me which columns I need to add manually after import.
Leave Requests — Full Structure:
| Display Name | Internal Name | Type | Required | Notes |
|------------------|-----------------|----------------------|----------|--------------------------------------|
| Title | Title | Single line of text | No | Built-in, can hide or repurpose |
| Requester Name | RequesterName | Person | Yes | Add manually after import |
| Leave Type | LeaveType | Choice | Yes | Annual, Convalescent, Emergency, Special Liberty |
| Start Date | StartDate | Date | Yes | Date only |
| End Date | EndDate | Date | Yes | Date only |
| Days Requested | DaysRequested | Calculated (Number) | — | Add after import: =[End Date]-[Start Date] |
| Justification | Justification | Multiple lines text | Yes | |
| Status | Status | Choice | Yes | Pending, Approved, Denied, Cancelled |
| Approver | Approver | Person | No | Add manually after import |
| Date Submitted | DateSubmitted | Date and Time | No | Default: =TODAY() |
| Date Actioned | DateActioned | Date and Time | No | |
CSV Template (save as LeaveRequests.csv):
LeaveType,StartDate,EndDate,Justification,Status,DateSubmitted,DateActioned
Annual,2026-04-01,2026-04-05,Block leave request,Pending,2026-03-16,
Note: Person columns (RequesterName, Approver) and the Calculated column
(DaysRequested) cannot be created via CSV. Add them after import.
Post-import steps:
1. Save CSV in Notepad (UTF-8) → SharePoint → Site Contents → New → List → From CSV
2. Convert LeaveType to Choice: click column → Edit → type "Choice" → add values
3. Convert Status to Choice: same steps → add Pending, Approved, Denied, Cancelled → default "Pending"
4. Add Person column "RequesterName": Create column → Person or Group → Required: Yes
5. Add Person column "Approver": Create column → Person or Group → Required: No
6. Add Calculated column "DaysRequested": Create column → Calculated →
Formula: =[End Date]-[Start Date] → Return type: Number
7. Set DateSubmitted default to =TODAY() in column settings
8. Click Integrate → Power Apps → Create an app
Instructor Checkpoint: Verify Column Types
Have students create the list now. Walk around and verify:
- The calculated column formula is correct (common error: wrong bracket types)
- Required fields are marked correctly
- Default values are set (Status=Pending, Date Submitted=TODAY())
Common student mistake: Students often forget to set the Return Type for calculated columns. The Days Requested formula will fail silently if this is set to "Single line of text" instead of "Number."
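A related point worth raising in class: the calculated column is plain date subtraction, which excludes the end date. The sketch below shows the arithmetic — whether your unit counts leave days inclusively is a business-rule decision to settle during planning, not a SharePoint question:

```python
from datetime import date

# What SharePoint's =[End Date]-[Start Date] computes: the difference in days.
start = date(2026, 4, 1)
end = date(2026, 4, 5)
days_requested = (end - start).days

assert days_requested == 4  # Apr 1 to Apr 5 yields 4, not 5

# If your unit counts leave days inclusively (Apr 1 through Apr 5 = 5 days),
# the formula needs a +1:  =[End Date]-[Start Date]+1
assert (end - start).days + 1 == 5
```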
Shortcut: Instant App from SharePoint
You do not need to build the app from scratch. Go to the Leave Requests list, click Integrate → Power Apps → Create an app. Power Apps generates a fully functional app connected to your list in seconds — browse screen, detail screen, edit form, all wired up.
This is your starting point. From here, use AI to customize the auto-generated app. The prompt below shows how.
Step 2: Customize the Auto-Generated App
I used Integrate → Power Apps → Create an app from my "Leave Requests" SharePoint list. Power Apps auto-generated a browse screen, detail screen, and edit form. I need to customize the edit form:
- Auto-populate the Requester Name field with the current user (read-only)
- Validation: End Date must be after Start Date
- Validation: Justification must be at least 20 characters
- Show a success message after saving
The auto-generated form uses an EditForm control. Give me the formulas I need to add these customizations.
The auto-generated app already has an EditForm connected to your Leave Requests list
with all fields wired up. You just need to customize these specific parts:
1. AUTO-POPULATE REQUESTER NAME
In the EditForm, find the RequesterName data card. Set the Default of its text input:
// DataCardValue_RequesterName.Default
User().FullName
Then lock it so users can't change it:
// DataCardValue_RequesterName.DisplayMode
DisplayMode.View
2. VALIDATION — ADD TO THE SAVE BUTTON
Find the save button (usually the checkmark icon). Replace its OnSelect with:
// IconAccept1.OnSelect
If(
DataCardValue_EndDate.SelectedDate <= DataCardValue_StartDate.SelectedDate,
Notify("End Date must be after Start Date", NotificationType.Error),
If(
Len(DataCardValue_Justification.Text) < 20,
Notify("Justification must be at least 20 characters", NotificationType.Error),
SubmitForm(EditForm1);
Notify("Leave request submitted successfully", NotificationType.Success);
Navigate(BrowseScreen1, ScreenTransition.None)
)
)
Note: SubmitForm handles all the data writing — it reads every data card
and patches the SharePoint list for you. You do NOT need a manual Patch()
call when using an EditForm. The auto-generated form already handles
Person fields, Choice fields, and date fields correctly.
3. SET STATUS DEFAULT TO "PENDING"
In the Status data card, set the Default:
// DataCardValue_Status.DefaultSelectedItems
{Value: "Pending"}
4. SET DATE SUBMITTED DEFAULT TO TODAY
// DataCardValue_DateSubmitted.DefaultDate
Today()
Instructor Checkpoint: Test Form Submission
Have each student submit a test request. Verify:
- The validation messages appear when End Date <= Start Date
- The validation messages appear when Justification is too short
- A successful submission shows the success notification
- The item appears in the SharePoint list with Status = "Pending"
Key teaching point: The auto-generated app already handles Person fields, Choice fields, and data connections. Students only wrote validation logic and defaults — the hard wiring was done by Power Apps. This is the pattern: generate the foundation, then customize with AI.
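For students who want to trace the validation order outside Power Apps, here is a plain-Python mirror of the two checks. The function name and the ISO-string date comparison are illustrative choices, not Power Fx — the point is the order: date check first, then justification length, exactly as in the OnSelect formula:

```python
from typing import Optional

def validate(start: str, end: str, justification: str) -> Optional[str]:
    """Mirror of the form's validation order. Returns the first error
    message that applies, or None if the submission is valid.
    Dates are ISO strings (YYYY-MM-DD), which sort chronologically."""
    if end <= start:
        return "End Date must be after Start Date"
    if len(justification) < 20:
        return "Justification must be at least 20 characters"
    return None

assert validate("2026-04-05", "2026-04-01", "Block leave with family travel") == "End Date must be after Start Date"
assert validate("2026-04-01", "2026-04-05", "Too short") == "Justification must be at least 20 characters"
assert validate("2026-04-01", "2026-04-05", "Block leave with family travel") is None
```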
Step 3: Routing Logic (Power Automate Flow Setup)
Shortcut: Start from SharePoint
From the Leave Requests list, click Integrate → Power Automate → Create a flow. Select "When a new item is created" — Power Automate pre-wires the trigger to your list automatically. Then use AI to add the routing logic.
I created a Power Automate flow from my "Leave Requests" SharePoint list using Integrate → Power Automate. The trigger "When an item is created" is already connected to the list. Now I need to add routing logic:
- 1-3 days: Route to Section Leader (sectionlead@1bn99marines.mil)
- 4-10 days: Route to Company Commander (co@1bn99marines.mil)
- More than 10 days: Route to Battalion Commander (bn-co@1bn99marines.mil)
The flow should update the Approver field in the SharePoint list with the appropriate person and send them an email notification with the request details. Give me the actions to add after the trigger.
Your trigger is already wired. Add these actions after it:
Step 1: Initialize variable "ApproverEmail"
- Name: ApproverEmail
- Type: String
- Value: (leave blank)
Step 2: Condition — Check days requested
- If: [Dynamic Content: Days Requested] is less than or equal to 3
- Yes: Set variable ApproverEmail to "sectionlead@1bn99marines.mil"
- No: (add nested condition below)
Step 3: (Inside the No branch) Condition — Check if 4-10 days
- If: [Dynamic Content: Days Requested] is less than or equal to 10
- Yes: Set variable ApproverEmail to "co@1bn99marines.mil"
- No: Set variable ApproverEmail to "bn-co@1bn99marines.mil"
Step 4: Update item (SharePoint)
- Site Address: [your site URL]
- List Name: Leave Requests
- Id: [Dynamic Content: ID]
- Approver Claims: [Expression: concat('i:0#.f|membership|', variables('ApproverEmail'))]
Step 5: Send an email (V2) (Office 365 Outlook)
- To: [Expression: variables('ApproverEmail')]
- Subject: New Leave Request Pending Your Approval
- Body:
A new leave request requires your approval:
Requester: [Dynamic Content: Requester Name DisplayName]
Leave Type: [Dynamic Content: Leave Type Value]
Dates: [Dynamic Content: Start Date] to [Dynamic Content: End Date]
Days: [Dynamic Content: Days Requested]
Justification: [Dynamic Content: Justification]
[Dynamic Content: Link to item]
Instructor Checkpoint: Test Routing Logic
Have students submit three test requests with different day counts (2 days, 7 days, 14 days). Check:
- Does the Approver field in SharePoint update correctly?
- Does the right email address receive the notification?
- Are all the dynamic fields populated in the email body?
Common issue: The nested conditions often get the logic backwards. Students should verify by checking the flow run history and tracing which branches executed. Also, the Person field (Approver) update requires Claims format — if it fails, check that the expression uses the concat/Claims pattern.
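To help students trace the nesting, here is a plain-Python mirror of the routing conditions and the Claims encoding. Function names are illustrative; the emails and thresholds are the ones from the prompt above:

```python
def route(days_requested: int) -> str:
    """Mirror of the flow's nested conditions. Getting the nesting
    backwards is the most common error — trace each branch."""
    if days_requested <= 3:
        return "sectionlead@1bn99marines.mil"
    if days_requested <= 10:
        return "co@1bn99marines.mil"
    return "bn-co@1bn99marines.mil"

def to_claims(email: str) -> str:
    """SharePoint Person fields in flows take a claims-encoded value."""
    return "i:0#.f|membership|" + email

# The three checkpoint test cases: 2, 7, and 14 days.
assert route(2) == "sectionlead@1bn99marines.mil"
assert route(7) == "co@1bn99marines.mil"
assert route(14) == "bn-co@1bn99marines.mil"
assert to_claims(route(2)) == "i:0#.f|membership|sectionlead@1bn99marines.mil"
```

Having students predict all three outputs before running the flow makes the run-history trace much faster to read.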
Step 4: Approval Workflow (Approver Actions)
I need to add approval functionality to the Power Automate flow. After the approver is notified, they should receive an approval action (Approve/Reject) directly in their email or Teams. When they take action:
- If Approved: Update Status to "Approved", set Date Actioned to now
- If Rejected: Update Status to "Denied", set Date Actioned to now
- Include an optional comments field for the approver
Show me how to modify the flow to add this approval action.
Modify your existing flow. Replace Action 5 (Send an email) with an approval action:
Step 5: Start and wait for an approval (Approvals)
- Approval type: Approve/Reject - First to respond
- Title: Leave Request Approval Needed
- Assigned to: [Expression: variables('ApproverEmail')]
- Details:
Requester: [Dynamic Content: Requester Name DisplayName]
Leave Type: [Dynamic Content: Leave Type Value]
Dates: [Dynamic Content: Start Date] to [Dynamic Content: End Date]
Days: [Dynamic Content: Days Requested]
Justification: [Dynamic Content: Justification]
- Item link: [Dynamic Content: Link to item]
- Item link description: View in SharePoint
- IMPORTANT: Open Settings on this action → set Timeout to P2D (2 days)
Step 6: Condition — check approval outcome
- [Dynamic Content: Outcome] is equal to "Approve"
Yes branch:
Step 6a: Update item (SharePoint)
- Id: [Dynamic Content: ID]
- Status Value: Approved
- DateActioned: [Expression: utcNow()]
No branch:
Step 6b: Update item (SharePoint)
- Id: [Dynamic Content: ID]
- Status Value: Denied
- DateActioned: [Expression: utcNow()]
To include approver comments (optional — add ApproverComments column to list first):
Add to both Update item actions:
- ApproverComments: [Dynamic Content: Responses Comments]
Instructor Checkpoint: Test Approval Actions
Students should test both approve and reject paths:
- Submit a new request
- Check email/Teams for the approval action
- Click Approve or Reject
- Verify the SharePoint list item updates correctly
- Verify Date Actioned is populated
Time check: If students are more than 40 minutes into this build, skip the optional Approver Comments field and move to Step 5. They can add it later if time permits.
Steps 5-6 are extensions. The core build (Steps 1-4) is complete. Complete Steps 5-6 if time permits in class, or assign as post-class practice.
Step 5 (Extension): Requester Notification
After the approver takes action, I need to notify the original requester of the decision. Send an email to the requester with:
- Subject line indicating approval or denial
- Summary of their original request
- The approver's name
- Any comments the approver provided
Add this to the existing flow after the approval action.
Add a notification email inside each branch of the Condition (Step 6):
In the Yes (Approve) branch, after Update item:
Step 7a: Send an email (V2) (Office 365 Outlook)
- To: [Dynamic Content: Requester Name Email]
- Subject: Your Leave Request Has Been Approved
- Body:
Your leave request has been APPROVED.
Request Details:
Leave Type: [Dynamic Content: Leave Type Value]
Dates: [Dynamic Content: Start Date] to [Dynamic Content: End Date]
Days: [Dynamic Content: Days Requested]
Approved by: [Dynamic Content: Responses Approver Name]
Comments: [Dynamic Content: Responses Comments]
Your leave is now authorized. Complete any required checkout procedures.
In the No (Reject) branch, after Update item:
Step 7b: Send an email (V2) (Office 365 Outlook)
- To: [Dynamic Content: Requester Name Email]
- Subject: Your Leave Request Has Been Denied
- Body:
Your leave request has been DENIED.
Request Details:
Leave Type: [Dynamic Content: Leave Type Value]
Dates: [Dynamic Content: Start Date] to [Dynamic Content: End Date]
Days: [Dynamic Content: Days Requested]
Denied by: [Dynamic Content: Responses Approver Name]
Comments: [Dynamic Content: Responses Comments]
Contact your approving authority if you have questions.
Instructor Checkpoint: Test End-to-End Flow
Students should now test the complete workflow from start to finish:
- Submit a request via the Power App
- Verify the approver receives the approval action
- Take action (approve or deny)
- Verify the requester receives the outcome notification
- Check the SharePoint list for correct Status and Date Actioned
Check-In Questions at this point:
- "At what points in this build did you verify AI output instead of just trusting it?"
- "What would break if you skipped the testing step and deployed this directly to users?"
- "How would you explain the difference between centaur and cyborg mode to someone who missed the last class?"
Step 6 (Extension): Error Handling
I need to add error handling to the Power Automate flow. Specifically:
- If the flow fails at any step, send an error notification to an admin (admin@1bn99marines.mil)
- If the approver field is empty or invalid, route to a default approver (co@1bn99marines.mil)
- Add a timeout: if the approver doesn't respond within 48 hours, send a reminder email
Show me how to configure this in Power Automate.
Three additions to the flow:
1. TRY/CATCH ERROR HANDLING:
- Select all actions (Steps 1 through 7b) → click "Add a scope" → name it "Try"
- After the Try scope, add a new Scope → name it "Catch"
- Click "..." on the Catch scope → Configure run after → check ONLY "has failed" and "has timed out"
- Inside the Catch scope, add:
Send an email (V2) (Office 365 Outlook)
- To: admin@1bn99marines.mil
- Subject: Leave Request Flow Failure
- Body:
The leave request workflow failed.
Requester: [Dynamic Content: Requester Name DisplayName]
Item ID: [Dynamic Content: ID]
Review the flow run history:
[Expression: concat('https://high.flow.microsoft.us/manage/environments/', workflow()?['tags']?['environmentName'], '/flows/', workflow()?['name'], '/runs/', workflow()?['run']?['name'])]
2. APPROVAL TIMEOUT (do NOT use a Delay action):
- Go back to Step 5 (Start and wait for an approval)
- Open its Settings tab
- Set Timeout to: P2D (2 days = 48 hours)
- After the timeout expires, the flow continues to Step 6 (Condition)
- The Outcome will be empty/timed out — add a third branch:
Step 6c: If Outcome is empty (timed out):
- Send reminder email to approver: [Expression: variables('ApproverEmail')]
- Update Status to "Escalated" or re-route to default approver
Do NOT add a separate Delay action for timeouts. The approval action's
built-in Timeout property handles this correctly. A separate Delay would
run AFTER the approval completes, not in parallel with it.
3. INVALID APPROVER FALLBACK:
- In Steps 2-3 (routing conditions), add a final Else branch
- Set variable ApproverEmail to "co@1bn99marines.mil" (default)
- Add: Send an email (V2) to admin@1bn99marines.mil
Subject: Leave Request Routing Fallback
Body: Could not determine approver for [Dynamic Content: Title].
Defaulted to CO. Days requested: [Dynamic Content: Days Requested].
Instructor Note: Deliberate Complexity
This step introduces significant complexity. Some students will struggle with nested scopes and run-after configuration. This is intentional. Use this moment to reinforce:
- When complexity increases, verification becomes more critical
- AI suggestions for error handling are often incomplete or wrong
- The human must test failure paths, not just success paths
Common student struggle: The 48-hour reminder logic often conflicts with the approval timeout. Students need to understand that the approval action itself has a timeout setting (default 30 days). If they want a 48-hour timeout, they need to set that in the approval action properties, not add a separate delay.
Time warning: If you're at 55+ minutes into this module, skip this step entirely. Students can add error handling as homework. The core centaur pattern is demonstrated in Steps 1-5.
Facilitation Protocol: Build #1
Mid-Build Check-In Questions (Ask at Step 3)
- "What parts of the AI's suggestions did you have to verify or correct so far?"
- "If you had to explain to your CO how you know this workflow is correct, what would you say?"
- "Where do you expect this build to fail when we add the approval workflow?"
Common Student Struggles: Build #1
- Person field syntax errors: The SharePoint Person field has a complex JSON structure. AI often provides outdated syntax. If students see errors, have them check Microsoft's current documentation for the correct @odata.type format.
- Calculated column not updating: SharePoint calculated columns don't recalculate in real-time. Students need to edit and re-save the item to see the calculation. This is a SharePoint limitation, not a build error.
- Flow not triggering: The "When an item is created" trigger sometimes has a 5-10 minute delay in testing environments. If the flow doesn't trigger immediately, wait a few minutes. For faster testing, use "For a selected item" trigger and run manually.
- Approval action not appearing in email: If the approver doesn't see the Approve/Reject buttons in email, check their Outlook settings. Actionable messages must be enabled. Teams approval always works as a fallback.
Build #1 Success Indicators
Students have successfully completed Build #1 when they can demonstrate:
- A Power App that submits a leave request and validates input
- A SharePoint list that stores the request with correct data types
- A Power Automate flow that routes to the correct approver based on days requested
- An approval action that updates the SharePoint list status
- Email notifications to both approver and requester
- They can articulate at least one place where they caught an AI error and corrected it
Phase 3 — Human Verifies
Test every path. Is the right person notified? What about edge cases? Submit test requests through every branch of your approval chain and verify the results against your whiteboard design.
Phase 4 — AI Refines
Back in GenAI.mil, paste your test results: “The flow works for the first approver but skips the second. Here’s the flow logic: [paste]. The business rule is [explain]. Fix the routing logic.” Describe exactly what failed, what you expected, and what happened instead. Copy the corrected code back into Power Platform.
Phase 5 — Human Accepts
Final review. Walk through the complete workflow one more time. Does it match the original design? Is it ready for users?
Key Teaching Point
At every phase boundary, there is a verification checkpoint. The human is responsible for design and verification. The AI handles execution. This separation is what makes centaur mode reliable for critical workflows.
Module 3: Failure Sharing
Duration: 10 minutes (during break)
This is NOT optional. This is one of the most important 10 minutes in the entire program.
Each participant shares:
- What the AI got wrong
- How they caught it
- How they fixed it
Document failures on a whiteboard or shared document. Look for patterns across the group. This becomes the beginning of the unit's frontier map — a living record of where AI works well and where it does not.
Instructor Note
Create psychological safety. Frame failures as valuable data, not embarrassments. Lead by sharing your own failures first. The more specific and honest the sharing, the more useful the frontier map becomes.
Module 4: Build #2 — Iterative Application, Cyborg Mode
Duration: 45 minutes
Problem: A training tracker that pulls from multiple sources and generates reports.
Explicit Cyborg Pattern
No distinct phases. Continuous back-and-forth between human and AI. The boundary between human thinking and AI execution is fluid. In cyborg mode, keep GenAI.mil open alongside Power Platform at all times. The conversation is continuous and the context accumulates in a single GenAI.mil thread.
Start rough in GenAI.mil: “Build me a data model for tracking training completion” → AI gives CSV + list structure → import CSV into SharePoint → click Integrate → Power Apps → Create an app → “I have the auto-generated app. Now I need to customize the input form: [describe changes]” → paste formulas into Power Apps → “That chart doesn’t show what I need. Here’s what I see: [paste current formula]. Change the X axis to show completion by date.”
Build incrementally:
- Data model — AI gives CSV template, you import into SharePoint
- App foundation — Integrate → Power Apps generates the starting point
- Customization — AI refines the auto-generated forms and galleries
- Dashboard — decide what leadership needs to see at a glance
- Automation — Integrate → Power Automate for flows
- Reports — decide what gets printed, emailed, or briefed
Evaluate and redirect at each step. You are discovering requirements as you build, not executing a predetermined plan.
GenAI.mil for CUI-Safe Prompting
Training tracker prompts often reference readiness data that may qualify as CUI. Use GenAI.mil or CamoGPT (Army-managed) for CUI-containing prompts. However, PII (real Marine names, SSNs, contact information) must be anonymized even on IL5 tools unless a Privacy Impact Assessment authorizes its use. Substitute fictional names and dates when possible. If using commercial tools, all data must be unclassified and non-sensitive. See the Approved Tools page for guidance.
Training Tracker Dashboard Mockup
This is what the dashboard should look like when built. Use this as the target for the cyborg build:
| Marine | Rank | Annual Rifle Qual | CBRN Awareness | Cyber Awareness | PFT/CFT | SAPR Training |
|---|---|---|---|---|---|---|
| Martinez, J.R. | LCpl | Complete | Complete | Overdue | Complete | Scheduled 15 Mar |
| Thompson, A.K. | Cpl | Complete | Overdue | Complete | Complete | Complete |
| Williams, D.M. | PFC | Scheduled 22 Mar | Complete | Complete | Overdue | Complete |
| Chen, R.L. | LCpl | Complete | Complete | Complete | Complete | Overdue |
| Davis, T.J. | Sgt | Complete | Complete | Overdue | Complete | Complete |
| Rodriguez, M.A. | LCpl | Overdue | Complete | Complete | Scheduled 1 Apr | Complete |
Summary: 21 of 30 tracked events Complete (70%), 6 Overdue (20%), 3 Scheduled (10%).
Report Output — S-3 Brief Format
As of 06 FEB 2026: Section maintains 70% training currency across five tracked events. Two critical overdue items require immediate action: LCpl Martinez Cyber Awareness (overdue 15 days) and Cpl Thompson CBRN Awareness (overdue 8 days). Section leader has been notified. Three events are scheduled within 30 days. Recommend prioritizing rifle qualification for PFC Williams (range date confirmed 22 Mar) and PFT/CFT for LCpl Rodriguez (scheduled 1 Apr). No systemic compliance gaps identified.
Complete Build Script: Training Tracker Dashboard (Cyborg Mode)
Unlike Build #1, this build does NOT follow a predetermined plan. Students start with a rough idea and refine iteratively. The prompts are shorter, more conversational, and the human makes judgment calls at each step. This demonstrates the cyborg pattern.
Iteration 1: Get Something Visible
I need a training tracker for 1st Battalion, 99th Marines. We need to track which Marines have completed which training events. Start with a SharePoint list that can store: Marine name, training event name, completion date, and status (Complete, Overdue, Scheduled). Give me the SharePoint list structure and a CSV I can import.
Training Tracker — Structure:
| Display Name | Internal Name | Type | Required | Notes |
|-----------------|-----------------|-------------------|----------|-----------------------------|
| Marine Name | MarineName | Person | Yes | Add manually after import |
| Training Event | TrainingEvent | Choice | Yes | See values below |
| Completion Date | CompletionDate | Date | No | Date only |
| Status | Status | Choice | Yes | Complete, Overdue, Scheduled|
| Due Date | DueDate | Date | Yes | Date only |
| Days Until Due | DaysUntilDue | Calculated | — | Add after import |
CSV Template (save as TrainingTracker.csv):
TrainingEvent,CompletionDate,Status,DueDate
Annual Rifle Qual,,Scheduled,2026-06-15
Post-import steps:
1. Convert TrainingEvent to Choice (values: Annual Rifle Qual, CBRN Awareness, Cyber Awareness, PFT/CFT, SAPR Training)
2. Convert Status to Choice (values: Complete, Overdue, Scheduled)
3. Add Person column "MarineName": Create column → Person or Group → Required
4. Skip the Calculated column "DaysUntilDue" for now: SharePoint rejects volatile functions
   such as TODAY() in calculated columns, so =[Due Date]-TODAY() fails validation.
   Days-until-due is computed downstream in Power BI instead (see Iteration 5)
5. Integrate → Power Apps → Create an app
This structure creates one row per training event per Marine.
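To make the row explosion concrete before students hit it, here is a short Python sketch that generates the full long-format CSV from the module's fictional roster (names and events are the sample data above; in the actual build the Person column is added manually after import, but including names here makes the row count obvious):

```python
import csv
import io
from itertools import product

# Fictional roster and events from this module; substitute your own.
marines = ["Martinez, J.R.", "Thompson, A.K.", "Williams, D.M.",
           "Chen, R.L.", "Davis, T.J.", "Rodriguez, M.A."]
events = ["Annual Rifle Qual", "CBRN Awareness", "Cyber Awareness",
          "PFT/CFT", "SAPR Training"]

buf = io.StringIO()
writer = csv.writer(buf)
# MarineName shown for illustration only; the real CSV template omits it.
writer.writerow(["MarineName", "TrainingEvent", "CompletionDate", "Status", "DueDate"])
for marine, event in product(marines, events):
    # Every Marine/event pair becomes its own row in the long format.
    writer.writerow([marine, event, "", "Scheduled", "2026-06-15"])

rows = buf.getvalue().strip().splitlines()
print(len(rows) - 1)  # 6 Marines x 5 events = 30 data rows
```

Thirty rows for six Marines is already unwieldy; a full company would produce hundreds, which is exactly the problem students should notice in Iteration 2.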
Instructor Observation Point
Notice that the AI provided a structure, but students haven't thought about how they'll actually use this data yet. In cyborg mode, you build first, evaluate, then iterate. The whiteboard planning happens incrementally, not upfront.
Have students create this list now. Don't let them spend more than 5 minutes on this step.
Iteration 2: Realize the Problem and Pivot
Wait, this structure will create a new row for every training event for every Marine. That's going to be a very long list. I need a dashboard where I can see all five training events for each Marine in one row. How should I restructure this?
Restructure as one row per Marine:
| Display Name | Internal Name | Type | Required | Notes |
|----------------------|---------------------|---------|----------|----------------------------------------|
| Marine Name | MarineName | Person | Yes | Add manually after import |
| Rank | Rank | Choice | Yes | PFC, LCpl, Cpl, Sgt, SSgt |
| Annual Rifle Qual | AnnualRifleQual | Choice | No | Complete, Overdue, Scheduled, Not Started |
| Annual Rifle Qual Date | AnnualRifleQualDate | Date | No | |
| CBRN Awareness | CBRNAwareness | Choice | No | Same values as above |
| CBRN Awareness Date | CBRNAwarenessDate | Date | No | |
| Cyber Awareness | CyberAwareness | Choice | No | Same values |
| Cyber Awareness Date | CyberAwarenessDate | Date | No | |
| PFT/CFT | PFTCFT | Choice | No | Same values |
| PFT/CFT Date | PFTCFTDate | Date | No | |
| SAPR Training | SAPRTraining | Choice | No | Same values |
| SAPR Training Date | SAPRTrainingDate | Date | No | |
CSV Template (save as TrainingDashboard.csv):
Rank,AnnualRifleQual,AnnualRifleQualDate,CBRNAwareness,CBRNAwarenessDate,CyberAwareness,CyberAwarenessDate,PFTCFT,PFTCFTDate,SAPRTraining,SAPRTrainingDate
Cpl,Complete,2026-01-15,Overdue,,Scheduled,2026-04-01,Complete,2025-12-01,Not Started,
Post-import steps:
1. Convert all training status columns to Choice (add values: Complete, Overdue, Scheduled, Not Started)
2. Add Rank as Choice (PFC, LCpl, Cpl, Sgt, SSgt)
3. Add Person column "MarineName" manually
4. Delete the old Training Tracker list
5. Integrate → Power Apps → Create an app
One row per Marine, all five training events visible at a glance. Trade-off: less
flexible if you add new training events later (you'd need to add two new columns).
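The restructure the AI describes is a long-to-wide pivot. A minimal Python sketch of the same transformation (the input rows are illustrative; column names follow the internal names in the table above):

```python
import csv
import io

# Long format: one row per Marine per event (illustrative sample rows).
long_csv = """MarineName,TrainingEvent,Status,CompletionDate
Thompson A.K.,Annual Rifle Qual,Complete,2026-01-15
Thompson A.K.,CBRN Awareness,Overdue,
Thompson A.K.,Cyber Awareness,Scheduled,2026-04-01
"""

# Map event display names to the wide-format internal column names.
event_cols = {
    "Annual Rifle Qual": "AnnualRifleQual",
    "CBRN Awareness": "CBRNAwareness",
    "Cyber Awareness": "CyberAwareness",
    "PFT/CFT": "PFTCFT",
    "SAPR Training": "SAPRTraining",
}

wide = {}  # one dict per Marine, keyed by name
for row in csv.DictReader(io.StringIO(long_csv)):
    rec = wide.setdefault(row["MarineName"], {"MarineName": row["MarineName"]})
    col = event_cols[row["TrainingEvent"]]
    rec[col] = row["Status"]               # status column
    rec[col + "Date"] = row["CompletionDate"]  # paired date column

print(wide["Thompson A.K."]["CBRNAwareness"])  # Overdue
```

The same trade-off the AI flagged is visible here: adding a sixth training event means adding a new entry to `event_cols` and two new columns, rather than just new rows.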
Instructor Teaching Point: This Is Cyborg Mode
Point out what just happened: The student didn't plan this upfront. They built something, evaluated it, realized it wouldn't work, and pivoted. This is cyborg mode.
The AI offered a tradeoff (flexibility vs. dashboard clarity). In centaur mode, this decision would have been made in the planning phase. In cyborg mode, you discover these tradeoffs as you build.
Have students delete the first list and create the new structure. This is intentional rework. It's part of the cyborg process.
Power BI Prerequisite
Iterations 3-5 use Power BI Desktop. If you have not used Power BI before, pair with someone who has, or skip ahead to Iteration 6 (the Power Automate scheduled report) which does not require Power BI.
Iteration 3: Build Power BI Dashboard (First Draft)
Now I need a Power BI dashboard that connects to this SharePoint list and shows: A table with all Marines and their training status for each event, color-coded (green for Complete, red for Overdue, yellow for Scheduled). Give me the steps to create this in Power BI.
Create a Power BI report connected to your SharePoint list:
1. Open Power BI Desktop
2. Get Data > SharePoint Online List
3. Enter your SharePoint site URL
4. Select "Training Status Dashboard" list
5. Click Load
Create the table visualization:
1. Add a Table visual to the canvas
2. Add these fields to the table:
- Marine Name
- Rank
- Annual Rifle Qual
- CBRN Awareness
- Cyber Awareness
- PFT/CFT
- SAPR Training
3. Add conditional formatting:
- Select the "Annual Rifle Qual" column
- Click Format > Conditional Formatting > Background Color
- Set rules:
- If value = "Complete", background = Green
- If value = "Overdue", background = Red
- If value = "Scheduled", background = Yellow
4. Repeat conditional formatting for all five training columns
5. Add a Card visual showing total number of Marines
6. Add a Pie Chart showing percentage Complete vs. Overdue vs. Scheduled across all training
Your dashboard is now ready to publish to Power BI Service and share with leadership.
INSTRUCTOR: DELIBERATE AI MISTAKE - DO NOT CORRECT YET
The AI's conditional formatting instructions are incomplete and will fail. The "Rules" format style only compares numeric values, so it cannot match text statuses like "Complete". The working approach, which the AI never mentioned, is "Field value" formatting based on a measure that returns a color.
Additionally, the Pie Chart suggestion is wrong. A pie chart showing "percentage Complete vs. Overdue" doesn't make sense with this data structure because each Marine has 5 different status values. The chart would be meaningless.
Let students attempt this and fail. Do not intervene yet. This is a teaching moment. When students raise their hands saying "the conditional formatting isn't working" or "the pie chart looks wrong," ask them:
- "What did the AI tell you to do?"
- "What result did you expect?"
- "What result did you get?"
- "What would you ask the AI to clarify or fix?"
This is frontier recognition in action. The AI gave plausible-sounding instructions that don't actually work. The human has to catch it.
Iteration 4: Fix the Conditional Formatting (Student-Driven)
The conditional formatting isn't working. When I select Background Color and try to set rules, I don't see an option to set color based on the field value. I see options for "Rules" or "Field value" but neither seems to work the way you described. Give me the exact steps for Power BI Desktop to color-code cells based on their text value.
You need "Field value" formatting driven by a helper measure, not "Rules": the Rules style only compares numeric values, so it cannot match text like "Complete". Here's the correct procedure for Power BI Desktop:
1. Create a measure that returns a color for each status (Modeling → New measure):
   Rifle Qual Color =
   SWITCH(
       SELECTEDVALUE('Training Status Dashboard'[Annual Rifle Qual]),
       "Complete", "#00B050",
       "Overdue", "#FF0000",
       "Scheduled", "#FFFF00",
       "Not Started", "#D3D3D3"
   )
2. Select the Table visual and, in the Format pane, expand "Cell elements" (labeled "Conditional formatting" in older releases)
3. Choose the "Annual Rifle Qual" column from the dropdown, turn on "Background color", and click the fx button
4. In the dialog:
   - Format style: Field value
   - What field should we base this on?: Rifle Qual Color
5. Click OK
Repeat the process (a new color measure plus the Field value setting) for each of the five training event columns.
Note: You must do this for each column individually. There's no way to apply the same
conditional formatting rule to multiple columns at once in Power BI Desktop.
For the summary metrics, instead of a pie chart, create:
- A Card showing "Total Complete" (count where any status = Complete)
- A Card showing "Total Overdue" (count where any status = Overdue)
- A Stacked Bar Chart showing count of each status across all five training events
This gives leadership a clearer picture of overall training readiness.
Instructor Debrief: Failure Recognition
After students successfully fix the conditional formatting, pause and debrief:
- "What was wrong with the AI's first answer?" (It gave incomplete steps)
- "How did you know it was wrong?" (They tested it and it didn't work)
- "What did you do to fix it?" (Asked the AI for clarification with specifics about what failed)
This is the core EDD skill: test, recognize failure, refine the prompt, iterate. The AI is not always right on the first try. The human must verify and course-correct.
Iteration 5: Add Days Until Due Calculation
I need to add a column that shows how many days until each training event is due. Each training event has an annual recurrence. For example, if a Marine completed Annual Rifle Qual on 15 Jan 2025, it's due again on 15 Jan 2026. I want the dashboard to show "Due in X days" or "Overdue by X days" for each event. How do I add this to the Power BI report?
You'll need to create calculated columns in Power BI for each training event.
In Power BI Desktop, go to the Data view (left sidebar, table icon).
Create a new column for "Rifle Qual Days Until Due":
Rifle Qual Days Until Due =
VAR CompletionDate = 'Training Status Dashboard'[Annual Rifle Qual Date]
VAR DueDate = DATE(YEAR(CompletionDate) + 1, MONTH(CompletionDate), DAY(CompletionDate))
VAR DaysUntil = DueDate - TODAY()
RETURN
IF(
ISBLANK(CompletionDate),
BLANK(),
IF(
DaysUntil < 0,
"Overdue by " & ABS(DaysUntil) & " days",
"Due in " & DaysUntil & " days"
)
)
Repeat this formula for each training event, changing the column names:
- CBRN Awareness Days Until Due
- Cyber Awareness Days Until Due
- PFT/CFT Days Until Due
- SAPR Training Days Until Due
Add these new calculated columns to your Table visual to show leadership exactly how
many days remain for each training requirement.
Instructor Checkpoint: Test the Calculation
Have students add test data to the SharePoint list with dates in the past (to generate "Overdue by X days") and verify the calculated columns work correctly.
Common issue: If the completion date is blank, the formula returns BLANK(). Students need to decide what to display in that case. Do they want "Not Started" or "No data" or leave it blank? This is a judgment call, not a technical problem.
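To help students reason about the checkpoint, the DAX logic can be sanity-checked outside Power BI. This Python sketch mirrors the formula (one-year recurrence; blank handling is the same judgment call, here returning None):

```python
from datetime import date
from typing import Optional

def days_until_due(completion: Optional[date], today: date) -> Optional[str]:
    """Mirror of the DAX column: due date = completion date + 1 year."""
    if completion is None:
        return None  # same as BLANK(): students decide what to display
    try:
        due = completion.replace(year=completion.year + 1)
    except ValueError:
        # 29 Feb completion: roll forward to 1 Mar, like DAX's DATE()
        due = date(completion.year + 1, 3, 1)
    delta = (due - today).days
    if delta < 0:
        return f"Overdue by {-delta} days"
    return f"Due in {delta} days"

print(days_until_due(date(2025, 1, 15), date(2026, 2, 6)))  # Overdue by 22 days
```

Feeding the same test dates into both this sketch and the Power BI column is a quick way to confirm the calculated columns are behaving as intended.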
Iteration 6: Generate S-3 Brief Report
I need to generate a text summary for an S-3 brief based on this data. The summary should state the overall training readiness percentage, list any critical overdue items (overdue by more than 7 days), and highlight upcoming scheduled training. Give me a Power Automate flow that runs weekly and generates this summary, then emails it to the S-3 (s3@1bn99marines.mil).
Create a Scheduled Cloud Flow in Power Automate:
Step 1: [Trigger] Recurrence
- Interval: 1 week
- Day: Monday
- Time: 08:00
Step 2: Get items (SharePoint)
- Site: [your site URL]
- List: Training Status Dashboard
- Top Count: 5000 (IMPORTANT — default is 100, will miss Marines)
Step 3: Initialize variable "TotalEvents" (Integer, value: 0)
Step 4: Initialize variable "CompleteEvents" (Integer, value: 0)
Step 5: Initialize variable "OverdueItems" (String, value: "")
Step 6: Initialize variable "ScheduledItems" (String, value: "")
Step 7: Apply to each
- Input: [Dynamic Content: value] from Get items
- Settings → Concurrency: 1 (keep sequential — we're incrementing counters)
Inside the loop, for EACH of the 5 training columns:
Condition: [Dynamic Content: Annual Rifle Qual Value] equals "Complete"
Yes: Increment variable CompleteEvents by 1
No: Condition: equals "Overdue"
    Yes: Append to string variable OverdueItems:
    [Expression: concat(items('Apply_to_each')?['MarineName']?['DisplayName'], ' - Annual Rifle Qual', '\n')]
    (Note: concat treats '\n' as two literal characters; press Enter inside the string, or use decodeUriComponent('%0A'), for a real line break)
    No: Condition: equals "Scheduled"
        Yes: Append to string variable ScheduledItems using the same concat pattern
        (without this branch, the UPCOMING SCHEDULED section of the email is always empty)
(Repeat for CBRN, Cyber, PFT/CFT, SAPR — 5 nested condition blocks total)
Increment variable TotalEvents by 5
Step 8: Compose — calculate readiness percentage
[Expression: mul(div(float(variables('CompleteEvents')), float(variables('TotalEvents'))), 100)]
Note: float() prevents integer division from returning 0.
Step 9: Send an email (V2) (Office 365 Outlook)
- To: s3@1bn99marines.mil
- Subject: [Expression: concat('Weekly Training Readiness - ', formatDateTime(utcNow(), 'dd MMM yyyy'))]
- Body:
Battalion Training Readiness: [Dynamic Content: Outputs from Step 8]%
across 5 tracked events.
CRITICAL OVERDUE (>7 days):
[Expression: if(empty(variables('OverdueItems')), 'None', variables('OverdueItems'))]
UPCOMING SCHEDULED:
[Expression: if(empty(variables('ScheduledItems')), 'None', variables('ScheduledItems'))]
Instructor Note: Time Check
If you're at 40+ minutes into Build #2, skip this iteration. The core cyborg pattern has been demonstrated in Iterations 1-5. The automated report is an enhancement, not a core requirement.
If you have time, let students attempt this. The loop logic is complex and many will struggle. Use this as another opportunity to practice iterative refinement with the AI.
Facilitation Protocol: Build #2
Mid-Build Check-In Questions (Ask at Iteration 3)
- "How is this build process different from Build #1? What feels different about the way you're working with the AI?"
- "At what point did you realize the first data structure wouldn't work? What made you notice that?"
- "If I asked you to add a sixth training event (Combat Lifesaver) right now, would it be easier with the first data structure or the second? Why?"
Common Student Struggles: Build #2
- Power BI conditional formatting not applying: Students often try to select multiple columns and apply formatting at once. Power BI requires you to format each column individually. This is tedious but necessary.
- Calculated columns showing errors: The DAX formula syntax is case-sensitive and picky about date functions. If students see #ERROR, have them check the column names in the formula match exactly (including spaces).
- SharePoint list not updating in Power BI: Power BI Desktop doesn't auto-refresh. Students need to click "Refresh" in the Home tab to see new SharePoint data. This is a frequent point of confusion.
- Confusion about when to use cyborg vs centaur: Reinforce that cyborg mode is for exploratory builds where requirements are unclear. Centaur mode is for well-defined workflows. This build is intentionally exploratory to demonstrate the difference.
If students are behind at Iteration 4: Skip Iteration 6 (automated report) and move directly to the debrief. The core learning objective is the cyborg pattern and failure recognition, both of which are covered in Iterations 1-5.
Build #2 Success Indicators
Students have successfully completed Build #2 when they can demonstrate:
- A SharePoint list with one row per Marine and columns for each training event
- A Power BI dashboard showing training status in a table format
- Conditional formatting that color-codes status (green/red/yellow)
- They restructured their data model at least once based on evaluation of the first attempt
- They caught at least one AI error (especially the conditional formatting or pie chart issue)
- They can explain how cyborg mode differs from centaur mode using this build as an example
Key Teaching Point
Cyborg mode is faster but requires more constant attention. You cannot walk away and let the AI run. Best for situations where you are discovering requirements as you build — where you do not fully know what you want until you see it taking shape.
Module 5: Build #3 — Your Problem, Choose Your Mode
Duration: 35 minutes
Each participant builds the solution they identified during Builder Orientation. This is their real problem with their real requirements.
Before starting, each participant states:
- What they are building (one sentence)
- Which mode (centaur or cyborg) and why
- Where they expect frontier issues — the places where AI is likely to struggle
Data Handling Reminder
Build #3 uses real unit problems. CUI is authorized on GenAI.mil and CamoGPT (Army-managed). PII (names, SSNs, contact info) must be anonymized even on IL5 tools unless a PIA authorizes it. Commercial tools (ChatGPT, Gemini via web) are for unclassified data only.
Expected Detail Level for Build Declarations
Each student's build declaration should be specific enough that the instructor can assess scope and feasibility in under one minute:
- "What they are building" should name the tool type and the user (e.g., "A vehicle dispatch tracker for the motor pool chief," not "something to help with vehicles").
- The mode selection must include a reason tied to the task characteristics (e.g., "Centaur, because the approval chain has strict business rules that need upfront design").
- Frontier issues should list at least two concrete risks (e.g., "AI may not know our unit's leave approval thresholds" or "SharePoint Person fields have caused errors in previous builds").
If a student's declaration is vague or lists zero frontier issues, have them revise before starting the build.
Decomposition Template
Fill this out before writing your first prompt:
- Data: What are you tracking? What columns does the list need?
- Users: Who creates the data? Who reads it? Who updates it? Who approves it?
- Workflow: What happens when a new item is created? Does it need routing or approval?
- Output: What does leadership need to see? Dashboard, report, email summary?
- Tech stack: SharePoint list → Power App (Integrate) → Power Automate flow? Or simpler?
This takes 5 minutes and prevents the most common Build #3 failure: starting to code before knowing what you're building.
Instructor Note
Resist solving every problem. When a participant gets stuck, ask: "What would you tell the AI about what is going wrong?" This builds the self-sufficiency they need to work independently after the course.
Assessment Criteria for All Three Builds
Use this rubric to evaluate student work on Builds #1, #2, and #3. The goal is not perfection but demonstration of EDD principles: verification, frontier recognition, and appropriate use of centaur vs cyborg modes.
| Criteria | Exceeds Standard | Meets Standard | Developing |
|---|---|---|---|
| Functional Prototype: Does the tool work end-to-end as intended? | Tool works completely with no errors. All edge cases handled. Ready for user testing. | Tool works for happy path (standard use case). Some edge cases not handled, but core functionality is solid. Would work for pilot users with caveats. | Tool has errors or fails on basic use cases. Core functionality incomplete or broken. Not ready for any users. |
| Centaur Pattern Applied (Build #1): Did the student follow distinct human/AI phases? | Clear whiteboard design phase before AI build. Verification checkpoints documented at each phase boundary. Student can articulate what the AI was responsible for vs. what they verified themselves. | Whiteboard design exists but may be incomplete. Some verification happened, though not systematically. Student understands the centaur concept even if execution was imperfect. | No upfront design, or student skipped directly to AI build without planning. No clear verification checkpoints. Doesn't understand why centaur mode matters. |
| Cyborg Pattern Applied (Build #2): Did the student iterate and refine continuously? | Clear evidence of multiple iterations. Student pivoted based on evaluating AI output. Data structure or approach changed at least once. Student can explain why they chose cyborg mode for this build. | Some iteration occurred. Student made at least one course correction based on testing. May have started with a plan but adapted it during build. Understands cyborg concept. | Tried to plan everything upfront (centaur approach) instead of iterating. No pivots or course corrections. Doesn't understand when to use cyborg mode. |
| Independent Problem-Solving (Build #3): Can the student work without instructor help? | Student encountered errors or issues and resolved them independently using AI iteration. Asked instructor clarifying questions but didn't need step-by-step guidance. Demonstrated self-sufficiency. | Student needed some instructor guidance but applied it effectively. When stuck, was able to articulate the problem and attempt solutions. Showed progress toward independence. | Student needed constant instructor intervention. Could not diagnose issues or articulate problems clearly. Waited for instructor to solve problems instead of attempting solutions. |
| Verification Steps Included: Did the student test and verify AI output? | Student tested all major paths, including error cases. Documented at least one AI mistake they caught and corrected. Can explain their verification process clearly. Used test data deliberately to probe edge cases. | Student tested happy path and caught at least one error. Verification was less systematic but happened. Can identify at least one place where they had to correct AI output. | Student assumed AI output was correct without testing. No documented verification steps. Could not identify any AI mistakes, suggesting they didn't look for them. |
| Frontier Recognition: Can the student identify where AI works well vs. poorly? | Student can clearly articulate at least three areas where AI struggled or required human judgment. Contributed specific examples to the frontier map. Understands the limitations of AI for this type of work. | Student identified at least one frontier issue (e.g., "the conditional formatting didn't work as AI described"). Contributed to frontier map discussion. Beginning to recognize patterns in AI limitations. | Student could not identify any specific AI failures or limitations. Either didn't encounter issues (unlikely) or didn't recognize them. Cannot contribute meaningfully to frontier map. |
Grading Guidance
This is a formative assessment, not summative. The rubric is for feedback and coaching, not for pass/fail decisions. If a student is "Developing" in any category, that indicates where they need additional practice or guidance.
Most important indicator: Verification Steps Included. If a student is not testing and verifying AI output, they are not ready for independent work. This is the core EDD discipline.
Students should self-assess using this rubric at the end of Build #3. Ask them: "Where do you think you are on each of these criteria?" This builds self-awareness and metacognition.
Module 6: Frontier Map Update and Wrap
Duration: 20 minutes
Compile all failure cases from the session into a shared frontier map. This is the collective intelligence of the group — a record of what AI can and cannot do for your specific use cases.
| Task Type | AI Handles Well | AI Handles Poorly | Verification Needed |
|---|---|---|---|
| SharePoint list schema design | Column types, relationships, basic validation rules | Complex calculated columns with nested IF logic | Always verify column data types match actual data |
| Power App form layout | Basic screens, navigation, standard input controls | Conditional visibility with 3+ dependencies | Test every show/hide rule with real data |
| Power Automate flows | Simple approval flows, email notifications, basic conditions | Error handling for nested conditions, retry logic | Test all error paths, not just the happy path |
| Data validation formulas | Required fields, format checks, simple lookups | Cross-table validation, date range conflicts | Check edge cases: empty fields, special characters |
| Dashboard/reporting | Basic charts, card layouts, simple filters | Complex DAX measures, row-level security | Verify numbers against source data manually |
| User interface text/labels | Button labels, headers, help text | Context-sensitive error messages, domain-specific jargon | Have an actual user read every message |
Completing Your Frontier Map
The table above is a starter example. Each student should add at least 2-3 rows specific to the task types they encountered during today's builds. A useful frontier map row:
- Names a specific task type, not a generic category
- Gives concrete examples of what AI produced correctly in the "Handles Well" column
- Describes a specific failure with enough detail to recognize it next time in the "Handles Poorly" column
- States a testable verification action, not just "check it"
If your rows read like the example above, make them more specific to your actual experience.
Assignment Before Advanced Workshop
- Complete Build #3 to a deployable state
- Run through the EDD SOP QA process
- Document three failure cases with specifics: what failed, how you caught it, how you fixed it
- Identify one area where AI capability surprised you — something it did better than you expected