Overview & Learning Objectives
By the end of this course, you will be able to:
- Build 3 tools using centaur and cyborg work patterns on Power Platform
- Practice deliberate mode-switching — choosing the right pattern for the right problem
- Create your first frontier map documenting where AI works well and where it struggles
- Apply the full EDD build process from design through verification
Before You Begin
This page assumes you completed Builder Orientation and attempted your first build. You should already be familiar with the centaur/cyborg distinction, the EDD SOP, and basic Power Platform navigation. If any of that is unfamiliar, go back and complete Builder Orientation first.
Module 1: Setup and Review
Duration: 15 minutes
Step 1: Verify Your Access
Before building anything, confirm you can reach all three platforms you will use today. Open each link and verify you can log in:
- SharePoint: Navigate to your unit SharePoint site and confirm you can create a new list
- Power Apps: Go to make.powerapps.com and verify you see the Home screen
- Power Automate: Go to make.powerautomate.com and verify you can create a new flow
Access Problems?
If you cannot log in to any of these platforms, contact your IT support before proceeding. You will need all three for every build in this course. Do not skip this step — resolving access issues during a build wastes significant time.
Step 2: Centaur vs. Cyborg Review
You learned these two patterns in Builder Orientation. Here is a quick reference you will use throughout the session:
| Pattern | How It Works | Best For | Trade-off |
|---|---|---|---|
| Centaur | Clear phases: human designs, AI builds, human verifies | High-stakes, accuracy-critical work | Slower but reliable |
| Cyborg | Continuous back-and-forth, fluid boundaries | Creative, iterative, discovery work | Faster but requires constant attention |
Self-Check: Which mode would you use for building a leave tracking system for your section? Why?
Centaur mode is the better choice here. Leave tracking involves specific regulations, approval chains, and date calculations that must be accurate. You want to design all the business rules up front (who approves, what the limits are, what happens with overlapping requests), then let AI build it, then verify every rule works correctly. Getting leave calculations wrong has real consequences for Marines, so the slower-but-reliable centaur pattern is appropriate.
Cyborg mode could work for the dashboard or reporting layer once the core logic is verified — but the foundational workflow should be centaur.
Module 2: Build #1 — Structured Workflow (Centaur Mode)
Duration: 60 minutes
Problem: 1st Bn, 99th Marines needs a request routing workflow. Someone submits a request, it goes to the right approver based on the dollar amount, the approver acts on it, and the requester gets notified of the decision. You will build this on Power Platform using the centaur pattern.
Explicit Centaur Pattern
This build follows five distinct phases with clear boundaries between them. At each boundary, you — the human — verify before moving on.
Phase 1 — Human Designs (You Do This)
Before touching AI, define everything on paper or a whiteboard. Map out every business rule, every data flow, every edge case. This is the most important phase — the quality of your design determines the quality of the final product.
Whiteboard Planning Template — Request Routing Workflow
Business Rules
- Requests under $500: Section Leader approves
- Requests $500–$2000: Department Head approves
- Requests over $2000: CO approves
- All requests require justification memo
- Approver has 48 hours to act or request auto-escalates
Data Flow
Requester submits via Power App form → SharePoint list → Notification to approver → Approver acts → Requester notified of decision
Notifications
- Requester: confirmation of submission, approval/denial notification
- Approver: new request alert, 24-hour reminder if not acted on
- Admin: weekly summary of all requests
Edge Cases
- Approver on leave: auto-route to alternate
- Request withdrawn: requester can cancel before approval
- Duplicate submission: system checks for matching requests within 24 hours
Copy or recreate this template for your own planning. The key is that you made every decision here — the AI has not been involved yet.
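The routing rules above can be sketched as a small function. This is an illustration in Python, not the Power Automate implementation you will actually build — the thresholds and approver titles come straight from the planning template:

```python
def route_request(amount: float) -> str:
    """Pick the approver for a request by dollar amount.

    Mirrors the whiteboard rules: under $500 goes to the
    Section Leader, $500-$2000 inclusive to the Department
    Head, over $2000 to the CO. Note the boundary handling:
    exactly $500 and exactly $2000 both belong to the
    Department Head.
    """
    if amount < 500:
        return "Section Leader"
    elif amount <= 2000:
        return "Department Head"
    else:
        return "CO"
```

Writing the rules this precisely on the whiteboard — including what happens at exactly $500 — is what lets you verify the AI's flow against them in Phase 3.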
Phase 2 — AI Builds (Prompt Sequence)
Now feed your design to AI, one component at a time. Use these prompts in sequence. Each one builds on the previous output.
Phase 3 — Human Verifies
This is the critical checkpoint. Walk through the AI output and verify it against your planning template. Do not skip any item on this list.
Verification Checklist
- TEST: Submit a $400 request. Verify it routes to Section Leader, not Department Head.
- TEST: If possible, let a request sit without action and verify the 24-hour reminder fires and the request auto-escalates at 48 hours.
- TEST: After submitting, check that the requester can view their request status in the app.
- TEST: Submit a request, then cancel it before approval. Verify the status updates correctly.
- Are all notification messages clear and accurate?
- Does the duplicate check actually catch matching requests within 24 hours?
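To make the duplicate check concrete, here is a minimal sketch of the logic you are verifying. This assumes a request carries a requester, a description, and a submission timestamp — the actual field names in your SharePoint list may differ:

```python
from datetime import datetime, timedelta

def is_duplicate(new_req: dict, existing: list[dict],
                 window_hours: int = 24) -> bool:
    """Flag a submission as a duplicate if a matching request
    (same requester and description) already exists within the
    lookback window."""
    cutoff = new_req["submitted"] - timedelta(hours=window_hours)
    return any(
        r["requester"] == new_req["requester"]
        and r["description"] == new_req["description"]
        and r["submitted"] >= cutoff
        for r in existing
    )
```

When you test this path, submit the same request twice within a day (should be caught) and once more than 24 hours later (should pass).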
Phase 4 — AI Refines
Feed your verification findings back to AI. Be specific about what failed, what you expected, and what happened instead.
Phase 5 — Human Accepts
Final walkthrough. Run through the entire workflow one more time from start to finish:
- Submit a test request at each dollar threshold ($400, $1200, $3000) and at the exact boundaries ($500, $2000)
- Verify each routes to the correct approver
- Approve one, deny one, let one time out
- Confirm all notifications fire correctly
- Test the withdrawal path
- Verify the admin weekly summary includes all test requests
If everything passes, your Build #1 is complete. If not, return to Phase 4 and iterate.
Key Teaching Point
At every phase boundary, there is a verification checkpoint. The human is responsible for design and verification. The AI handles execution. This separation is what makes centaur mode reliable for critical workflows. Notice that you never let the AI decide what to build — only how to build what you already designed.
Module 3: Reflection — What Went Wrong
Duration: 15 minutes
Before moving on to your next build, take 15 minutes to reflect on what happened during Build #1. Every builder encounters AI failures — the skill is in recognizing them quickly and knowing how to recover. This structured reflection replaces what would be a group failure-sharing session during live instruction.
Do not skip this reflection. The patterns you identify here will save you hours on Build #3. Write honest, specific answers.
Build #1 Reflection Template
What did the AI get wrong during Build #1?
(Be specific. Which component? What was the exact error?)
_______________________________________________________________
_______________________________________________________________
How did you catch it?
(During which phase? What test or review revealed the problem?)
_______________________________________________________________
_______________________________________________________________
How did you fix it?
(What did you tell the AI? Did it fix the issue on the first try?)
_______________________________________________________________
_______________________________________________________________
What would you check for next time?
(What will you add to your personal verification checklist?)
_______________________________________________________________
_______________________________________________________________
Save this reflection. You will use it when you build your frontier map in Module 6.
Self-Check: What is the most common type of error AI makes in Power Automate flows?
Error handling for nested conditions and missing edge case paths. AI is good at building the "happy path" — the flow that works when everything goes as expected. But it frequently misses error handling for nested conditions (what happens when a condition inside another condition fails?) and omits edge case paths entirely (what if the approver field is empty? what if the amount is exactly $500?).
This is why Phase 3 verification is non-negotiable. You must test the paths the AI is most likely to forget.
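The exactly-$500 case is worth seeing side by side. This hypothetical pair shows the kind of boundary bug an AI-generated flow can contain and what the corrected version looks like — Python stands in for the Power Automate conditions:

```python
def route_buggy(amount: float) -> str:
    # Hypothetical AI output: "<=" where the rule says strictly
    # under $500, and "<" where $2000 should still be included.
    if amount <= 500:
        return "Section Leader"   # $500 exactly lands here: wrong
    elif amount < 2000:
        return "Department Head"
    return "CO"                   # $2000 exactly lands here: wrong

def route_fixed(amount: float) -> str:
    # Boundaries corrected to match the written business rules.
    if amount < 500:
        return "Section Leader"
    elif amount <= 2000:
        return "Department Head"
    return "CO"
```

Both versions agree on $400, $1200, and $3000 — only tests at the exact thresholds expose the difference, which is why those values belong on your verification checklist.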
Module 4: Build #2 — Training Tracker (Cyborg Mode)
Duration: 60 minutes
Problem: 1st Bn, 99th Marines needs a training tracker that shows completion status across the section and generates reports for leadership. You will build this using the cyborg pattern — continuous back-and-forth with AI, discovering requirements as you go.
Target Dashboard
This is what your dashboard should look like when complete. Use this as your target while building:
| Marine | Rank | Annual Rifle Qual | CBRN Awareness | Cyber Awareness | PFT/CFT | SAPR Training |
|---|---|---|---|---|---|---|
| Martinez, J.R. | LCpl | Complete | Complete | Overdue | Complete | Scheduled 15 Mar |
| Thompson, A.K. | Cpl | Complete | Overdue | Complete | Complete | Complete |
| Williams, D.M. | PFC | Scheduled 22 Mar | Complete | Complete | Overdue | Complete |
| Chen, R.L. | LCpl | Complete | Complete | Complete | Complete | Overdue |
| Davis, T.J. | Sgt | Complete | Complete | Overdue | Complete | Complete |
| Rodriguez, M.A. | LCpl | Overdue | Complete | Complete | Scheduled 1 Apr | Complete |
Summary across all Marines and events: 67% Current, 17% Overdue, 16% Scheduled.
Report Output
Your tracker should also generate this kind of summary for leadership briefs:
Report Output — S-3 Brief Format
As of 06 FEB 2026: Section maintains 67% training currency across five tracked events. Two critical overdue items require immediate action: LCpl Martinez Cyber Awareness (overdue 15 days) and Cpl Thompson CBRN Awareness (overdue 8 days). Section leader has been notified. Three events are scheduled within 30 days. Recommend prioritizing rifle qualification for PFC Williams (range date confirmed 22 Mar) and PFT/CFT for LCpl Rodriguez (scheduled 1 Apr). No systemic compliance gaps identified.
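One way to think about the reporting layer: the brief is a template filled from tracker counts. This sketch assembles the opening of a summary like the one above from structured data — a hedged illustration (the function and field shapes are made up for this example; your tracker will generate its report through Power Automate, not Python):

```python
def s3_summary(as_of: str, currency_pct: int, events: int,
               overdue: list[tuple[str, str, int]]) -> str:
    """Assemble the opening of an S-3 brief from tracker stats.

    `overdue` holds (marine, event, days_overdue) tuples.
    """
    lines = [f"As of {as_of}: Section maintains {currency_pct}% "
             f"training currency across {events} tracked events."]
    if overdue:
        items = "; ".join(f"{m} {e} (overdue {d} days)"
                          for m, e, d in overdue)
        lines.append(f"{len(overdue)} critical overdue items "
                     f"require immediate action: {items}.")
    return " ".join(lines)
```

The point of the exercise is that the narrative text should always be derived from the same data the dashboard shows, so the two can never disagree.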
Guided Cyborg Build
Unlike the centaur build, you will not plan everything up front. Start with a rough idea and iterate. Use these prompts as your starting sequence, then continue the conversation based on what the AI produces.
Notice the difference: Unlike Build #1, you are not planning everything up front. You are discovering requirements as you iterate. This is deliberate — cyborg mode trades planning time for iteration speed. Stay engaged with every output.
Review the output. Does the data model make sense? Are the right columns included? Adjust as needed, then continue.
Check the formatting. Does it match the target dashboard above? Once it does, add the reporting layer.
From here, continue iterating based on what you see. You might need to adjust column types, fix formatting, add filters, or refine the report language. That is the cyborg pattern — you are discovering what works through rapid iteration.
Key Teaching Point
Cyborg mode is faster but requires constant attention. You cannot walk away and let the AI run. You are evaluating and redirecting at each step, discovering requirements as you build rather than executing a predetermined plan. Notice how different this feels from the centaur build — that difference is deliberate, and knowing when to use which pattern is one of the core skills this course develops.
Self-Check: Does your tracker match the target?
Compare your dashboard to the target table above. Verify:
- Does it display all Marines with their training statuses?
- Are overdue items visually distinct (red or highlighted)?
- Does the summary show correct percentages?
- Can you generate the S-3 brief format report?
If any are missing, revisit the prompts in this module and iterate.
Module 5: Build #3 — Your Problem
Duration: 60 minutes
Now build a solution to a real problem you face in your unit. This is the tool you identified during Builder Orientation — your actual requirement, your actual data, your actual users.
Before You Start
Answer these three questions before writing a single prompt:
- What are you building? (Describe it in one sentence.)
- Which mode will you use — centaur or cyborg — and why?
- Where do you expect frontier issues? (Where is AI likely to struggle with your specific problem?)
Build #3 Decomposition Template
Problem Statement
(One sentence: who has the problem, what is the problem, what does a solution look like?)
_______________________________________________________________
Mode Selection
(Centaur or Cyborg? Why?)
_______________________________________________________________
Components to Build
(Break your solution into 3-5 pieces. What platform tools will you use for each?)
- _______________________________________________________________
- _______________________________________________________________
- _______________________________________________________________
- _______________________________________________________________
- _______________________________________________________________
Expected Frontier Issues
(Where will AI likely struggle? What will you need to verify most carefully?)
- _______________________________________________________________
- _______________________________________________________________
- _______________________________________________________________
Verification Plan
(How will you know it works? What tests will you run?)
- _______________________________________________________________
- _______________________________________________________________
- _______________________________________________________________
Need a reference for what a good decomposition looks like? See the completed example in Builder Orientation Module 4.
Completion Criteria
A complete decomposition should include: a one-sentence problem statement, the selected mode (Centaur or Cyborg) with a reason, 3–5 components with the platform tools you will use for each, at least two expected frontier issues, and three or more verification tests. If you cannot fill all fields, revisit your problem definition.
Getting Stuck?
When you hit a wall, ask yourself: "What would I tell the AI about what is going wrong?" If you can describe the problem clearly, you can prompt the AI to fix it. If you cannot describe it, that is a sign you need to step back and think through the design before asking AI for more code.
Module 6: Frontier Map Update
Duration: 15 minutes
A frontier map records what AI can and cannot do for your specific use cases. Based on everything you experienced in Builds #1 through #3, fill in or update the map below. This becomes a living reference you will use on every future build.
| Task Type | AI Handles Well | AI Handles Poorly | Verification Needed |
|---|---|---|---|
| SharePoint list schema design | Column types, relationships, basic validation rules | Complex calculated columns with nested IF logic | Always verify column data types match actual data |
| Power App form layout | Basic screens, navigation, standard input controls | Conditional visibility with 3+ dependencies | Test every show/hide rule with real data |
| Power Automate flows | Simple approval flows, email notifications, basic conditions | Error handling for nested conditions, retry logic | Test all error paths, not just the happy path |
| Data validation formulas | Required fields, format checks, simple lookups | Cross-table validation, date range conflicts | Check edge cases: empty fields, special characters |
| Dashboard/reporting | Basic charts, card layouts, simple filters | Complex DAX measures, row-level security | Verify numbers against source data manually |
| User interface text/labels | Button labels, headers, help text | Context-sensitive error messages, domain-specific jargon | Have an actual user read every message |
How to use your frontier map: Update it monthly as you gain experience. Before starting any new build, consult your map to identify frontier risks early. Share it with your section so everyone benefits from your lessons learned.
Frontier Map Completion Criteria
Your frontier map should have at least 5 task types relevant to your role. Each cell should contain a specific example, not just “yes” or “no.” For example, instead of writing “handles well” under AI Handles Well, write “Generates draft SOPs from bullet points with correct formatting.”
Update this map with your own observations. Where did your experience differ from the table above? Add rows for task types specific to your Build #3 problem.
Assignment
Before Advanced Workshop
- Complete Build #3 to a deployable state
- Run through the EDD SOP QA process
- Document 3 failure cases with specifics: what failed, how you caught it, how you fixed it
- Identify one area where AI surprised you — something it did better than you expected
Bring all of this to the Advanced Workshop. Your failure cases and frontier map will be the foundation for that session.
Capstone Deliverable
Working Tool Prototype
Build one functional Power Platform tool that solves a real problem identified in your Problem Definition. Document the build process in your Development Journal with at least 3 session entries.
Knowledge Check
You need to build a leave tracking system that involves specific regulations, approval chains, and date calculations. Which work pattern is most appropriate and why?
Centaur mode is correct. Leave tracking involves regulations, approval chains, and date calculations that must be accurate. You design all the business rules up front, let AI build, then verify every rule works. The slower-but-reliable centaur pattern is appropriate when getting calculations wrong has real consequences.
Why does the course require you to verify access to SharePoint, Power Apps, and Power Automate before starting any build?
Discovering access issues mid-build is one of the biggest time wasters. Since every build in this course requires all three platforms, verifying access up front prevents costly interruptions and ensures you can focus on learning the build patterns rather than troubleshooting login problems.
What is the key risk associated with cyborg mode compared to centaur mode?
Cyborg mode trades planning time for iteration speed. The fluid, continuous back-and-forth is faster, but you must stay engaged and evaluate every output. Without constant attention, errors can compound quickly because there are no formal verification checkpoints between phases.
In the centaur pattern for Build #1, why is it critical that the human completes the whiteboard planning phase before involving AI?
In centaur mode, the human is responsible for all design decisions. By mapping out every business rule, data flow, and edge case before AI is involved, you ensure the AI builds what you designed rather than making its own design decisions. The AI should decide how to build, never what to build.
During Phase 3 (Human Verifies) of the centaur build, you discover that a $400 request routes to the Department Head instead of the Section Leader. What is the correct next step?
The centaur pattern includes a refinement phase (Phase 4) specifically for this situation. You feed your verification findings back to AI with specific details: what you expected, what happened instead. Being specific about failures helps the AI produce accurate fixes. Skipping errors or restarting entirely both waste the work already done.
Why does the centaur build use a sequence of separate prompts (one for SharePoint, one for Power Apps, one for Power Automate) rather than a single prompt for the entire system?
Breaking the build into sequential prompts maintains the centaur principle of verification at each boundary. Each component (data layer, input form, automation flow) can be verified against the design before the next component is built on top of it. This prevents errors from cascading through the entire system.
What is the most common type of error AI makes when building Power Automate flows?
AI excels at building the "happy path" but frequently misses error handling for nested conditions and omits edge case paths entirely. For example, it might not account for what happens when the approver field is empty or when the amount is exactly at the threshold. This is why Phase 3 verification must focus on testing the paths AI is most likely to forget.
Why does the course require a structured reflection after Build #1 instead of immediately starting Build #2?
The reflection is not a break or a formality. The patterns you identify from Build #1 failures directly inform how you approach Build #3 and become source material for your frontier map in Module 6. Recognizing AI failure patterns quickly and knowing how to recover is a core skill that develops through deliberate reflection.
Your Build #1 reflection reveals that the AI-generated flow handled the standard approval path correctly but failed when a request was submitted with an amount of exactly $500. What does this failure tell you about AI-generated workflows?
Boundary conditions (values that fall exactly on threshold limits) are a classic edge case that AI often mishandles. The AI might use "less than" when it should use "less than or equal to," causing requests at exactly $500 to route incorrectly. This is why your verification checklist should always include tests at exact threshold values, not just values clearly above or below them.
Build #2 (Training Tracker) uses cyborg mode instead of centaur mode. What is the fundamental difference in how you discover requirements?
Cyborg mode trades planning time for iteration speed. Instead of defining every business rule on a whiteboard before involving AI, you start with a rough idea and discover requirements as you iterate. You are constantly evaluating and redirecting the AI, adjusting layout, field order, and validation rules based on what each iteration produces.
While building the training tracker dashboard in cyborg mode, the AI generates a summary showing "67% Current" but your manual count shows 70%. What should you do?
Even in cyborg mode, data accuracy requires verification. The discrepancy likely stems from a difference in calculation methodology (e.g., counting rows vs. distinct values, or averaging individual percentages vs. computing an overall rate). The correct approach is to investigate how the AI computed the number, align the formula with your intended methodology, and verify the corrected output against your manual count.
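The methodology gap is easy to see with toy numbers. In this hypothetical data (invented for illustration), Marines track different numbers of events, so pooling every cell and averaging per-Marine rates give different answers:

```python
def overall_rate(records: dict[str, list[bool]]) -> float:
    """Pool every (marine, event) cell and compute one rate."""
    cells = [done for events in records.values() for done in events]
    return 100 * sum(cells) / len(cells)

def average_of_rates(records: dict[str, list[bool]]) -> float:
    """Compute each Marine's completion rate, then average them."""
    rates = [sum(ev) / len(ev) for ev in records.values()]
    return 100 * sum(rates) / len(rates)

# One Marine with 1 completed event, one with 1 of 3 completed:
sample = {"A": [True], "B": [True, False, False]}
```

Pooled, that sample is 2 of 4 cells complete (50%); averaged per Marine it is (100% + 33%) / 2 (about 67%). Neither number is wrong — they answer different questions, which is why you must pin down the intended methodology before trusting the AI's summary figure.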
The training tracker needs conditional formatting: green for Complete, red for Overdue, yellow for Scheduled. After the AI applies formatting, you notice "Scheduled 15 Mar" items appear in green instead of yellow. What is the most likely cause?
AI frequently writes conditional formatting that checks for exact string matches (e.g., Status = "Scheduled") when the actual data contains additional text (e.g., "Scheduled 15 Mar"). The condition fails to match, so the item falls through to the default color. The fix is to use a "starts with" or "contains" check instead of an exact match. This is a common pattern where AI handles the happy path but misses real-world data variations.
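The fix is a one-line change in the matching logic. This sketch contrasts the two approaches in Python (the real fix would be in your app's conditional formatting formula; the default color here is an assumption based on the symptom described):

```python
def status_color_exact(status: str) -> str:
    # First attempt: exact-match lookup. "Scheduled 15 Mar"
    # matches nothing and falls through to the default.
    colors = {"Complete": "green", "Overdue": "red",
              "Scheduled": "yellow"}
    return colors.get(status, "green")

def status_color_fixed(status: str) -> str:
    # Fix: match on the status prefix, so date-suffixed values
    # like "Scheduled 15 Mar" still map to yellow.
    for prefix, color in [("Complete", "green"),
                          ("Overdue", "red"),
                          ("Scheduled", "yellow")]:
        if status.startswith(prefix):
            return color
    return "green"
```

When you report this back to the AI in cyborg mode, describe the data variation ("status values sometimes carry a date suffix"), not just the wrong color — that is the detail that leads it to the prefix match.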
Before starting Build #3, the course requires you to answer three specific questions. What is the purpose of answering "Where do you expect frontier issues?" before writing any prompts?
Anticipating frontier issues before building allows you to plan your verification strategy. If you know AI typically struggles with a particular aspect of your problem (e.g., complex date logic, domain-specific validation), you can allocate more time and attention to verifying those components. This proactive approach prevents the costly cycle of building, discovering failures late, and rebuilding.
The Build #3 Decomposition Template asks you to break your solution into 3-5 components. Why is decomposition important for an AI-assisted build?
Decomposition enables targeted verification. When you break a solution into discrete components, you can verify each one independently against your design. If a single component fails, you can fix it without rebuilding the entire system. This applies regardless of whether you use centaur or cyborg mode for the build.
You are stuck on Build #3 and cannot figure out what is wrong with the AI output. According to the course guidance, what should you do?
The key insight is: if you can describe the problem clearly, you can prompt the AI to fix it. If you cannot describe it, that signals a design problem, not an AI problem. Stepping back to clarify your own understanding of what should happen is more productive than generating more code against a poorly understood requirement.
A frontier map documents what AI can and cannot do for your use cases. Why is specificity important when filling in the "AI Handles Poorly" column?
Generic entries like "handles poorly" provide no actionable guidance. Specific entries like "error handling for nested conditions" or "complex calculated columns with nested IF logic" tell you exactly what to watch for on your next build. The more specific your frontier map, the faster you can identify risk areas and plan verification accordingly.
The example frontier map shows that AI handles "basic charts, card layouts, simple filters" well for dashboards, but handles "complex DAX measures, row-level security" poorly. How should this information change your approach when building a dashboard?
The frontier map enables targeted use of AI. You can confidently use AI for components inside the frontier (layout, basic charts) while applying extra scrutiny or manual effort to components outside the frontier (DAX measures, row-level security). This selective approach captures AI's speed benefits while protecting against its known weaknesses.
The course says to update your frontier map monthly. Why is periodic updating necessary?
The frontier is not static. AI capabilities improve over time, so tasks that were outside the frontier (like complex conditional workflows) may move inside as models get better. Your own experience also deepens, revealing new patterns about what AI handles well or poorly. Re-evaluate tasks near the edge of your map periodically so it reflects current capability rather than last quarter's.
Course Completion Checklist
Course Complete!
You have completed Platform Training. View your progress dashboard or generate your certificate.