Overview & Learning Objectives

By the end of this course, you will be able to:

  • Build 3 tools using centaur and cyborg work patterns on Power Platform
  • Practice deliberate mode-switching — choosing the right pattern for the right problem
  • Create your first frontier map documenting where AI works well and where it struggles
  • Apply the full EDD build process from design through verification

Before You Begin

This page assumes you completed Builder Orientation and attempted your first build. You should already be familiar with the centaur/cyborg distinction, the EDD SOP, and basic Power Platform navigation. If any of that is unfamiliar, go back and complete Builder Orientation first.

Module 1: Setup and Review

Duration: 15 minutes

Step 1: Verify Your Access

Before building anything, confirm you can reach all three platforms you will use today. Open each link and verify you can log in:

  1. SharePoint: Navigate to your unit SharePoint site and confirm you can create a new list
  2. Power Apps: Go to make.powerapps.com and verify you see the Home screen
  3. Power Automate: Go to make.powerautomate.com and verify you can create a new flow

Access Problems?

If you cannot log in to any of these platforms, contact your IT support before proceeding. You will need all three for every build in this course. Do not skip this step — resolving access issues during a build wastes significant time.

Step 2: Centaur vs. Cyborg Review

You learned these two patterns in Builder Orientation. Here is a quick reference you will use throughout the session:

| Pattern | How It Works | Best For | Tradeoff |
| Centaur | Clear phases: human designs, AI builds, human verifies | High-stakes, accuracy-critical work | Slower but reliable |
| Cyborg | Continuous back-and-forth, fluid boundaries | Creative, iterative, discovery work | Faster but requires constant attention |

Self-Check: Which mode would you use for building a leave tracking system for your section? Why?

Centaur mode is the better choice here. Leave tracking involves specific regulations, approval chains, and date calculations that must be accurate. You want to design all the business rules up front (who approves, what the limits are, what happens with overlapping requests), then let AI build it, then verify every rule works correctly. Getting leave calculations wrong has real consequences for Marines, so the slower-but-reliable centaur pattern is appropriate.

Cyborg mode could work for the dashboard or reporting layer once the core logic is verified — but the foundational workflow should be centaur.

Module 2: Build #1 — Structured Workflow (Centaur Mode)

Duration: 60 minutes

Problem: 1st Bn, 99th Marines needs a request routing workflow. Someone submits a request, it goes to the right approver based on the dollar amount, the approver acts on it, and the requester gets notified of the decision. You will build this on Power Platform using the centaur pattern.

Explicit Centaur Pattern

This build follows five distinct phases with clear boundaries between them. At each boundary, you — the human — verify before moving on.

Phase 1 — Human Designs (You Do This)

Before touching AI, define everything on paper or a whiteboard. Map out every business rule, every data flow, every edge case. This is the most important phase — the quality of your design determines the quality of the final product.

Whiteboard Planning Template — Request Routing Workflow

Business Rules
  • Requests under $500: Section Leader approves
  • Requests $500–$2000: Department Head approves
  • Requests over $2000: CO approves
  • All requests require justification memo
  • Approver has 48 hours to act or request auto-escalates
Data Flow

Requester submits via Power App form → SharePoint list → Notification to approver → Approver acts → Requester notified of decision

Notifications
  • Requester: confirmation of submission, approval/denial notification
  • Approver: new request alert, 24-hour reminder if not acted on
  • Admin: weekly summary of all requests
Edge Cases
  • Approver on leave: auto-route to alternate
  • Request withdrawn: requester can cancel before approval
  • Duplicate submission: system checks for matching requests within 24 hours

Copy or recreate this template for your own planning. The key is that you made every decision here — the AI has not been involved yet.
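Before prompting, pin down the boundary behavior of your routing rules: does a request of exactly $500 go to the Section Leader or the Department Head? Here is a minimal Python sketch of the business rules above (illustrative only — the real logic will live in Power Automate conditions), assuming both boundaries belong to the middle band:

```python
def route_approver(amount: float) -> str:
    """Route a request by dollar amount, per the planning template.
    Boundary assumption (decide this yourself in Phase 1): exactly
    $500 and exactly $2000 both go to the Department Head."""
    if amount < 500:
        return "Section Leader"
    if amount <= 2000:
        return "Department Head"
    return "CO"
```

Writing the rule out this precisely before Phase 2 forces a boundary decision that an AI-generated flow would otherwise make silently.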

Phase 2 — AI Builds (Prompt Sequence)

Now feed your design to AI, one component at a time. Use these prompts in sequence. Each one builds on the previous output.

Prompt 1 — SharePoint List Design
Create a SharePoint list design for a request routing system. Columns needed: Request Title (single line text), Requester Name (person), Amount (currency), Justification (multi-line text), Status (choice: Pending/Approved/Denied/Withdrawn), Approver (person), Date Submitted (date), Date Resolved (date). Give me a CSV template I can import into SharePoint, plus which columns to add manually after import.
Prompt 2 — Power App Form
I used Integrate → Power Apps → Create an app from the SharePoint list above. The auto-generated app has an EditForm with all fields wired up. Customize it to add: validation (Amount must be > 0, Justification must be at least 20 characters), and a success notification after saving.
Prompt 3 — Power Automate Flow
I created a Power Automate flow from the SharePoint list using Integrate → Power Automate. The trigger 'When an item is created' is already wired. Add routing logic: under $500 → Section Leader, $500-$2000 → Dept Head, over $2000 → CO. Send an approval email to the right person and update the list with the decision.
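The validation rules in Prompt 2 are simple enough to state exactly before handing them to AI. A plain-Python sketch of the intended behavior (in the real app this logic would be Power Fx on the form controls; the function name is illustrative):

```python
def validate_request(amount: float, justification: str) -> list[str]:
    """Return a list of validation errors; an empty list means valid."""
    errors = []
    if amount <= 0:
        errors.append("Amount must be greater than 0")
    if len(justification.strip()) < 20:
        errors.append("Justification must be at least 20 characters")
    return errors
```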

Phase 3 — Human Verifies

This is the critical checkpoint. Walk through the AI output and verify it against your planning template. Do not skip any item on this list.

Verification Checklist

  • TEST: Submit a $400 request. Verify it routes to Section Leader, not Department Head.
  • TEST: If possible, let a request sit without action and verify that the 24-hour reminder and the 48-hour auto-escalation both fire.
  • TEST: After submitting, check that the requester can view their request status in the app.
  • TEST: Submit a request, then cancel it before approval. Verify the status updates correctly.
  • Are all notification messages clear and accurate?
  • Does the duplicate check actually catch matching requests within 24 hours?
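The duplicate check in the last item is worth specifying exactly, because "matching requests within 24 hours" is ambiguous about which fields must match. A Python sketch of one reasonable interpretation (matching on title and amount is an assumption — your design may compare different columns):

```python
from datetime import datetime, timedelta

def is_duplicate(new_req: dict, existing: list[dict],
                 window_hours: int = 24) -> bool:
    """True if a request with the same title and amount was
    submitted within the lookback window."""
    cutoff = new_req["submitted"] - timedelta(hours=window_hours)
    return any(
        r["title"] == new_req["title"]
        and r["amount"] == new_req["amount"]
        and r["submitted"] >= cutoff
        for r in existing
    )
```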

Phase 4 — AI Refines

Feed your verification findings back to AI. Be specific about what failed, what you expected, and what happened instead.

Prompt 4 — Refinement
I tested the request routing workflow and found these issues:
1. [Describe specific issue — e.g., "When I submit a $1500 request, it routes to the Section Leader instead of the Department Head"]
2. [Describe specific issue — e.g., "There is no auto-escalation after 48 hours. The flow ends after sending the initial approval email"]
3. [Describe specific issue — e.g., "The requester never receives a confirmation email after submitting"]
For each issue: here is what I expected, here is what happened instead. Fix each one and explain what you changed.
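Issue 2 in the example (missing auto-escalation) is the kind of time-based logic AI flows frequently omit. The condition you want the fixed flow to implement can be stated in a few lines of Python (illustrative only; in Power Automate this would be a recurrence trigger plus a condition):

```python
from datetime import datetime, timedelta

def needs_escalation(submitted: datetime, status: str,
                     now: datetime, window_hours: int = 48) -> bool:
    """True when a still-pending request has exceeded the 48-hour
    window from the planning template and should auto-escalate."""
    return status == "Pending" and now - submitted > timedelta(hours=window_hours)
```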

Phase 5 — Human Accepts

Final walkthrough. Run through the entire workflow one more time from start to finish:

  1. Submit a test request at each dollar threshold ($400, $1200, $3000)
  2. Verify each routes to the correct approver
  3. Approve one, deny one, let one time out
  4. Confirm all notifications fire correctly
  5. Test the withdrawal path
  6. Verify the admin weekly summary includes all test requests

If everything passes, your Build #1 is complete. If not, return to Phase 4 and iterate.

Key Teaching Point

At every phase boundary, there is a verification checkpoint. The human is responsible for design and verification. The AI handles execution. This separation is what makes centaur mode reliable for critical workflows. Notice that you never let the AI decide what to build — only how to build what you already designed.

Module 3: Reflection — What Went Wrong

Duration: 15 minutes

Before moving on to your next build, take 15 minutes to reflect on what happened during Build #1. Every builder encounters AI failures — the skill is in recognizing them quickly and knowing how to recover. This structured reflection replaces what would be a group failure-sharing session during live instruction.

Do not skip this reflection. The patterns you identify here will save you hours on Build #3. Take the full 15 minutes and write honest answers.

Build #1 Reflection Template

What did the AI get wrong during Build #1?

(Be specific. Which component? What was the exact error?)

_______________________________________________________________

_______________________________________________________________

How did you catch it?

(During which phase? What test or review revealed the problem?)

_______________________________________________________________

_______________________________________________________________

How did you fix it?

(What did you tell the AI? Did it fix the issue on the first try?)

_______________________________________________________________

_______________________________________________________________

What would you check for next time?

(What will you add to your personal verification checklist?)

_______________________________________________________________

_______________________________________________________________

Save this reflection. You will use it when you build your frontier map in Module 6.

Self-Check: What is the most common type of error AI makes in Power Automate flows?

Error handling for nested conditions and missing edge case paths. AI is good at building the "happy path" — the flow that works when everything goes as expected. But it frequently misses error handling for nested conditions (what happens when a condition inside another condition fails?) and omits edge case paths entirely (what if the approver field is empty? what if the amount is exactly $500?).

This is why Phase 3 verification is non-negotiable. You must test the paths the AI is most likely to forget.
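The exactly-$500 failure mode is easy to see in miniature. A contrived Python comparison of a typical AI-generated routing condition against a corrected one (both functions are illustrative, not actual flow output):

```python
def route_buggy(amount):
    # Typical AI output: strict comparisons on both sides, so the
    # exact boundaries ($500 and $2000) fall through to the CO branch.
    if amount < 500:
        return "Section Leader"
    if 500 < amount < 2000:
        return "Department Head"
    return "CO"

def route_fixed(amount):
    # Corrected: the $500-$2000 band includes both endpoints.
    if amount < 500:
        return "Section Leader"
    if amount <= 2000:
        return "Department Head"
    return "CO"
```

Both versions agree on $400 and $1200 — the happy-path inputs — which is exactly why boundary values belong on your verification checklist.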

Module 4: Build #2 — Training Tracker (Cyborg Mode)

Duration: 60 minutes

Problem: 1st Bn, 99th Marines needs a training tracker that shows completion status across the section and generates reports for leadership. You will build this using the cyborg pattern — continuous back-and-forth with AI, discovering requirements as you go.

Target Dashboard

This is what your dashboard should look like when complete. Use this as your target while building:

| Marine | Rank | Annual Rifle Qual | CBRN Awareness | Cyber Awareness | PFT/CFT | SAPR Training |
| Martinez, J.R. | LCpl | Complete | Complete | Overdue | Complete | Scheduled 15 Mar |
| Thompson, A.K. | Cpl | Complete | Overdue | Complete | Complete | Complete |
| Williams, D.M. | PFC | Scheduled 22 Mar | Complete | Complete | Overdue | Complete |
| Chen, R.L. | LCpl | Complete | Complete | Complete | Complete | Overdue |
| Davis, T.J. | Sgt | Complete | Complete | Overdue | Complete | Complete |
| Rodriguez, M.A. | LCpl | Overdue | Complete | Complete | Scheduled 1 Apr | Complete |

Summary: 70% Current | 20% Overdue | 10% Scheduled
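Whatever the AI reports in the summary bar, cross-check it by hand. A hand count of the target table's 30 cells gives 21 Complete, 6 Overdue, and 3 Scheduled; a short Python sketch of the percentage math:

```python
from collections import Counter

def summarize(statuses: list[str]) -> dict[str, int]:
    """Percentage of cells in each status; any 'Scheduled <date>'
    value counts as Scheduled."""
    norm = ["Scheduled" if s.startswith("Scheduled") else s for s in statuses]
    counts = Counter(norm)
    return {k: round(100 * v / len(norm)) for k, v in counts.items()}

# The 30 cells from the target dashboard:
cells = ["Complete"] * 21 + ["Overdue"] * 6 + \
        ["Scheduled 15 Mar", "Scheduled 22 Mar", "Scheduled 1 Apr"]
# summarize(cells) -> {"Complete": 70, "Overdue": 20, "Scheduled": 10}
```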

Report Output

Your tracker should also generate this kind of summary for leadership briefs:

Report Output — S-3 Brief Format

As of 06 FEB 2026: Section maintains 70% training currency across five tracked events. Two critical overdue items require immediate action: LCpl Martinez Cyber Awareness (overdue 15 days) and Cpl Thompson CBRN Awareness (overdue 8 days). Section leader has been notified. Three events are scheduled within 30 days. Recommend prioritizing rifle qualification for PFC Williams (range date confirmed 22 Mar) and PFT/CFT for LCpl Rodriguez (scheduled 1 Apr). No systemic compliance gaps identified.

Guided Cyborg Build

Unlike the centaur build, you will not plan everything up front. Start with a rough idea and iterate. Use these prompts as your starting sequence, then continue the conversation based on what the AI produces.

Notice the difference: Unlike Build #1, you are not planning everything up front. You are discovering requirements as you iterate. This is deliberate — cyborg mode trades planning time for iteration speed. Stay engaged with every output.

Prompt 1 — Initial Structure
I need a training tracker for my Marine Corps section in 1st Bn, 99th Marines. It should track 6 Marines across 5 required training events. I need to see at a glance who is current, who is overdue, and who has events scheduled. Build this as a SharePoint list with a Power App gallery view.

Review the output. Does the data model make sense? Are the right columns included? Adjust and continue:

Prompt 2 — Visual Formatting
Add conditional formatting: green for Complete, red for Overdue, yellow for Scheduled. Add a summary bar at the top showing percentages for each status across all Marines and all events.

Check the formatting. Does it match the target dashboard above? Now add the reporting layer:

Prompt 3 — Report Generation
Generate a report view that summarizes the data in paragraph form suitable for an S-3 brief. Include: overall percentage current, critical overdue items with names and how many days overdue, and upcoming scheduled events in the next 30 days. Format it as a professional military brief.

From here, continue iterating based on what you see. You might need to adjust column types, fix formatting, add filters, or refine the report language. That is the cyborg pattern — you are discovering what works through rapid iteration.

Key Teaching Point

Cyborg mode is faster but requires constant attention. You cannot walk away and let the AI run. You are evaluating and redirecting at each step, discovering requirements as you build rather than executing a predetermined plan. Notice how different this feels from the centaur build — that difference is deliberate, and knowing when to use which pattern is one of the core skills this course develops.

Self-Check: Does your tracker match the target?

Compare your dashboard to the target table above. Verify:

  • Does it display all Marines with their training statuses?
  • Are overdue items visually distinct (red or highlighted)?
  • Does the summary show correct percentages?
  • Can you generate the S-3 brief format report?

If any are missing, revisit the prompts in this module and iterate.

Module 5: Build #3 — Your Problem

Duration: 60 minutes

Now build a solution to a real problem you face in your unit. This is the tool you identified during Builder Orientation — your actual requirement, your actual data, your actual users.

Before You Start

Answer these three questions before writing a single prompt:

  1. What are you building? (Describe it in one sentence.)
  2. Which mode will you use — centaur or cyborg — and why?
  3. Where do you expect frontier issues? (Where is AI likely to struggle with your specific problem?)

Build #3 Decomposition Template

Problem Statement

(One sentence: who has the problem, what is the problem, what does a solution look like?)

_______________________________________________________________

Mode Selection

(Centaur or Cyborg? Why?)

_______________________________________________________________

Components to Build

(Break your solution into 3-5 pieces. What platform tools will you use for each?)

  1. _______________________________________________________________
  2. _______________________________________________________________
  3. _______________________________________________________________
  4. _______________________________________________________________
  5. _______________________________________________________________
Expected Frontier Issues

(Where will AI likely struggle? What will you need to verify most carefully?)

  • _______________________________________________________________
  • _______________________________________________________________
  • _______________________________________________________________
Verification Plan

(How will you know it works? What tests will you run?)

  • _______________________________________________________________
  • _______________________________________________________________
  • _______________________________________________________________

Need a reference for what a good decomposition looks like? See the completed example in Builder Orientation Module 4.

Completion Criteria

A complete decomposition should include: a clear tool name and purpose, the selected mode (Centaur or Cyborg) with a reason, 3–5 components to build, at least 2 specific data sources, and 3+ verification criteria. If you cannot fill all fields, revisit your problem definition.

Getting Stuck?

When you hit a wall, ask yourself: "What would I tell the AI about what is going wrong?" If you can describe the problem clearly, you can prompt the AI to fix it. If you cannot describe it, that is a sign you need to step back and think through the design before asking AI for more code.

Module 6: Frontier Map Update

Duration: 15 minutes

A frontier map records what AI can and cannot do for your specific use cases. Based on everything you experienced in Builds #1 through #3, fill in or update the map below. This becomes a living reference you will use on every future build.

| Task Type | AI Handles Well | AI Handles Poorly | Verification Needed |
| SharePoint list schema design | Column types, relationships, basic validation rules | Complex calculated columns with nested IF logic | Always verify column data types match actual data |
| Power App form layout | Basic screens, navigation, standard input controls | Conditional visibility with 3+ dependencies | Test every show/hide rule with real data |
| Power Automate flows | Simple approval flows, email notifications, basic conditions | Error handling for nested conditions, retry logic | Test all error paths, not just the happy path |
| Data validation formulas | Required fields, format checks, simple lookups | Cross-table validation, date range conflicts | Check edge cases: empty fields, special characters |
| Dashboard/reporting | Basic charts, card layouts, simple filters | Complex DAX measures, row-level security | Verify numbers against source data manually |
| User interface text/labels | Button labels, headers, help text | Context-sensitive error messages, domain-specific jargon | Have an actual user read every message |

How to use your frontier map: Update it monthly as you gain experience. Before starting any new build, consult your map to identify frontier risks early. Share it with your section so everyone benefits from your lessons learned.

Frontier Map Completion Criteria

Your frontier map should have at least 5 task types relevant to your role. Each cell should contain a specific example, not just “yes” or “no.” For example, instead of writing “handles well” under AI Handles Well, write “Generates draft SOPs from bullet points with correct formatting.”

Update this map with your own observations. Where did your experience differ from the table above? Add rows for task types specific to your Build #3 problem.
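One lightweight way to keep the map "living" is to store it as data rather than a static table, so monthly updates are diffable and easy to share with your section. A Python sketch (the file name and field names are illustrative, not part of any course template):

```python
import csv

# Columns mirror the frontier map table above.
FIELDS = ["task_type", "handles_well", "handles_poorly", "verification"]

frontier_map = [
    {"task_type": "Power Automate flows",
     "handles_well": "Simple approval flows, email notifications",
     "handles_poorly": "Error handling for nested conditions",
     "verification": "Test all error paths, not just the happy path"},
]

# Dump to CSV so each monthly revision can be tracked and compared.
with open("frontier_map.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(frontier_map)
```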

Assignment

Before Advanced Workshop

  1. Complete Build #3 to a deployable state
  2. Run through the EDD SOP QA process
  3. Document 3 failure cases with specifics: what failed, how you caught it, how you fixed it
  4. Identify one area where AI surprised you — something it did better than you expected

Bring all of this to the Advanced Workshop. Your failure cases and frontier map will be the foundation for that session.

Capstone Deliverable

Working Tool Prototype

Build one functional Power Platform tool that solves a real problem identified in your Problem Definition. Document the build process in your Development Journal with at least 3 session entries.

Knowledge Check

You need to build a leave tracking system that involves specific regulations, approval chains, and date calculations. Which work pattern is most appropriate and why?

Why does the course require you to verify access to SharePoint, Power Apps, and Power Automate before starting any build?

What is the key risk associated with cyborg mode compared to centaur mode?

In the centaur pattern for Build #1, why is it critical that the human completes the whiteboard planning phase before involving AI?

During Phase 3 (Human Verifies) of the centaur build, you discover that a $400 request routes to the Department Head instead of the Section Leader. What is the correct next step?

Why does the centaur build use a sequence of separate prompts (one for SharePoint, one for Power Apps, one for Power Automate) rather than a single prompt for the entire system?

What is the most common type of error AI makes when building Power Automate flows?

Why does the course require a structured reflection after Build #1 instead of immediately starting Build #2?

Your Build #1 reflection reveals that the AI-generated flow handled the standard approval path correctly but failed when a request was submitted with an amount of exactly $500. What does this failure tell you about AI-generated workflows?

Build #2 (Training Tracker) uses cyborg mode instead of centaur mode. What is the fundamental difference in how you discover requirements?

While building the training tracker dashboard in cyborg mode, the AI generates a summary showing "67% Current" but your manual count shows 70%. What should you do?

The training tracker needs conditional formatting: green for Complete, red for Overdue, yellow for Scheduled. After the AI applies formatting, you notice "Scheduled 15 Mar" items appear in green instead of yellow. What is the most likely cause?

Before starting Build #3, the course requires you to answer three specific questions. What is the purpose of answering "Where do you expect frontier issues?" before writing any prompts?

The Build #3 Decomposition Template asks you to break your solution into 3-5 components. Why is decomposition important for an AI-assisted build?

You are stuck on Build #3 and cannot figure out what is wrong with the AI output. According to the course guidance, what should you do?

A frontier map documents what AI can and cannot do for your use cases. Why is specificity important when filling in the "AI Handles Poorly" column?

The example frontier map shows that AI handles "basic charts, card layouts, simple filters" well for dashboards, but handles "complex DAX measures, row-level security" poorly. How should this information change your approach when building a dashboard?

The course says to update your frontier map monthly. Why is periodic updating necessary?

Course Completion Checklist
