Overview & Learning Objectives

By the end of this course you will be able to:

  • Build a working prototype of a simple tool using AI, starting from a blank screen.
  • Apply task decomposition to break a real problem into buildable subtasks before touching any tool.
  • Debug with AI — read an error message, provide context, and iterate until the problem is resolved.

Before You Begin

This page assumes you have completed AI Fluency Fundamentals and can apply all six 201 skills: Task Decomposition, Context Assembly, Iterative Refinement, Quality Judgment, Frontier Recognition, and Failure Recovery. If you cannot recall at least four of these from memory, go back and review before continuing.

Module | Est. Time | 201 Skills Applied
From User to Builder | 10 min read | Task Decomposition
Guided Build — Equipment Tracker | 45 min hands-on | Task Decomposition, Context Assembly, Iterative Refinement
When Something Breaks | 25 min hands-on | Iterative Refinement, Quality Judgment, Failure Recovery
Your Problem — Decomposition Exercise | 25 min exercise | Task Decomposition, Frontier Recognition
Assignment & Path Forward | 10 min read | All six

Module 1: From User to Builder

Estimated time: 10 min read

Bridge from User to Builder

In AI Fluency Fundamentals you learned to manage AI as a user — writing prompts, refining outputs, and judging quality inside a single conversation. Building tools is the same set of skills applied at higher complexity. Instead of managing one conversation, you are managing an entire project through AI. The 201 skills do not change; the scope does.

Think of it this way: a user asks AI to draft an email. A builder asks AI to create the system that drafts, tracks, and sends emails. The difference is not talent — it is decomposition. You learn to see the system as parts, then build each part with the same prompting skills you already have.

The Builder Mindset: Decompose Before Building

Before you open any tool — before you type a single prompt — decompose on paper or a whiteboard. What are the components? What does the data look like? What does the user need to see? What actions do they need to take?

This is Task Decomposition applied to system design. The quality of your decomposition predicts the quality of your build. A vague plan produces a vague tool. A specific plan produces a specific tool.

Common Mistake

The number-one mistake new builders make is opening the tool first and “figuring it out as they go.” This works for simple tasks. It fails completely for anything with more than two or three moving parts. Decompose first. Always.

Self-Check: What is the first thing you should do before opening any tool?

Decompose the problem on paper. Identify the data, the user actions, and the components before writing a single prompt. If you skipped straight to “open the AI tool and start prompting,” revisit the builder mindset section above.

Module 2: Guided Build — Equipment Tracker

Estimated time: 45 min hands-on

In this module you will build a working equipment checkout tracker for 1st Bn, 99th Marines from scratch, using AI. Follow each step in order. By the end you will have a functional web application and a repeatable process you can apply to any problem.

Step 1: Decompose the Problem

The problem: “I need a way to track equipment checkout for my section.”

Before touching any tool, answer three questions on paper:

  • What data fields do I need?
  • What does the user need to do?
  • What is the simplest useful version?

Here is what that looks like filled out:

Planning Template — Equipment Tracker

Problem Statement: "I need a way to track equipment checkout for my section at 1st Bn, 99th Marines."

Data Fields Needed:
- Item name (e.g., "PVS-14 Night Vision")
- Serial number (alphanumeric, 6-12 characters)
- Person checking out (name and rank)
- Date checked out
- Date due back
- Status (Available, Checked Out, Overdue)

User Actions:
  1. Check out a piece of equipment
  2. Check equipment back in
  3. View a list of all overdue items

Simplest Useful Version: A single-page web app with a form, a table of all items, and an overdue-items view. No login, no database — browser storage is fine for a prototype.

Notice how specific this is. You have not opened a tool yet, but you already know exactly what you are asking AI to build. This is decomposition in action.
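Each entry in the tracker is one record with the six fields above. As a minimal sketch, one record might look like this in JavaScript (the field names here are illustrative; the code AI generates may choose different ones):

```javascript
// One equipment record matching the planning template.
// Field names are illustrative, not prescribed by the course.
const exampleRecord = {
  itemName: "PVS-14 Night Vision",
  serialNumber: "SN12345A",   // alphanumeric, 6-12 characters
  person: "Cpl Smith",        // name and rank
  dateOut: "2025-06-01",
  dateDue: "2025-06-08",
  status: "Checked Out"       // Available | Checked Out | Overdue
};
```

Writing out one concrete record like this makes it easy to check AI's output against your plan field by field.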

Step 2: Write Your First Prompt

Now translate your planning template into a prompt. Be specific about every field, every action, and the technology. Here is the prompt to use:

Prompt — Initial Build
I need to build an equipment checkout tracker for a Marine Corps section. Data fields: Item name, Serial number, Person checking out (name and rank), Date checked out, Date due back, Status (Available, Checked Out, Overdue). Users need to: check out equipment, check it back in, and see a list of all overdue items. Build this as a simple web application using HTML, CSS, and JavaScript. Start with just the data entry form.

Copy this prompt into your AI tool and run it. You should receive HTML, CSS, and JavaScript for a basic data entry form.

What to Expect

Your output should include an HTML form with inputs for all six fields: item name (text), serial number (text), person checking out (text), date checked out (date), date due back (date), and status (dropdown: Available/Checked Out/Overdue). If your output is missing fields or uses different input types, that is normal; use the refinement prompts in the next step to fix it.

Step 3: Review the Output

Before you do anything else, review what you got. Open the HTML file in a browser and check:

  • Does it have all six data fields? Item name, serial number, person (name and rank), date out, date due, status.
  • Does the form actually work? Can you type in each field and submit?
  • Does it display submitted entries? You should see a table or list of items.

Common Issues at This Stage

  • AI might omit the status field or default everything to one status.
  • The “overdue” calculation might be missing entirely — AI often handles the form but skips the business logic.
  • Date fields might not use a date picker.
  • The form might not clear after submission.

These are all normal. This is why we iterate. Note what is missing and move to Step 4.

Step 4: Iterate

Now refine. You will make two passes, each adding specific functionality. After each prompt, review the output in your browser before moving to the next one.

Refinement Pass 1: Overdue View and Check-In

Prompt — Refinement Pass 1
Add an overdue items view that highlights anything past the due date in red. Add a check-in button next to each checked-out item. When the user clicks check-in, the status should change to "Available" and the item should disappear from the overdue list if it was there.

What to Expect

After this pass, your tracker should show a separate overdue items section or visual indicator (typically red text or red background) for any item whose due date is in the past. Each checked-out item should have a button or link labeled “Check In” or similar. Clicking it should change the status back to “Available” and remove the item from the overdue view.

After running this prompt, check:

  • Do overdue items actually appear in red?
  • Does the check-in button work?
  • Does checking in an item update its status immediately?
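Under the hood, check-in is just a status update keyed on the serial number. A minimal sketch of that logic, assuming records shaped like the planning template (function and field names are illustrative, not what AI will necessarily generate):

```javascript
// Flip a checked-out item's status back to "Available",
// matching on serial number. All other items are untouched.
function checkIn(items, serialNumber) {
  return items.map(item =>
    item.serialNumber === serialNumber
      ? { ...item, status: "Available" }
      : item
  );
}
```

If the check-in button changes the wrong item, or changes nothing, the generated code is likely matching on the wrong field; describe that behavior in your next refinement prompt.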

Refinement Pass 2: Validation and Summary

Prompt — Refinement Pass 2
The serial number field should validate format (alphanumeric, 6-12 characters). Show a validation error if the format is wrong. Add a summary at the top showing total items, checked out, and overdue counts. Update the counts automatically when items are checked out or returned.

What to Expect

Your form should now show a validation error message when the serial number is outside the 6–12 alphanumeric character range. At the top of the page, you should see a summary area displaying three counts: total items, currently checked out, and overdue. These counts should update automatically when you add a new entry or check an item back in. If the counts do not update, the AI likely generated static text instead of dynamic counters — mention this in your next refinement prompt.

After running this prompt, test the validation:

  • Try entering a serial number that is too short (e.g., “AB1”). Does it reject it?
  • Try a valid serial number (e.g., “SN12345A”). Does it accept it?
  • Do the summary counts update when you add or check in items?
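Both the validation rule and the summary counts are small, testable pieces of logic. Here is a sketch of each, assuming the field names from the planning template (your generated code may differ):

```javascript
// Serial numbers must be alphanumeric, 6-12 characters.
// isValidSerial("AB1")      -> false (too short)
// isValidSerial("SN12345A") -> true
function isValidSerial(serial) {
  return /^[A-Za-z0-9]{6,12}$/.test(serial);
}

// Counts for the summary dashboard at the top of the page.
function summarize(items) {
  return {
    total: items.length,
    checkedOut: items.filter(i => i.status === "Checked Out").length,
    overdue: items.filter(i => i.status === "Overdue").length
  };
}
```

If the counts in your build stay frozen, ask AI to recompute them from the item list every time an item is added or checked in, rather than rendering them once at page load.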

Step 5: Review Your Final Product

Your Finished Tracker Should Include

  • A data entry form with all six fields (item, serial, person, date out, date due, status)
  • Serial number validation (alphanumeric, 6–12 characters)
  • A table showing all equipment with current status
  • A check-in button next to each checked-out item
  • An overdue items view with red highlighting
  • A summary dashboard: total items, checked out, overdue
  • Automatic status updates when items are checked in or become overdue

Take a moment to compare your result against the original planning template from Step 1. Every requirement you wrote down before opening any tool should now exist in the finished product. If anything is missing, write one more targeted prompt to add it.

Self-Check: Does your tracker have all original requirements?

Go back to the planning template in Step 1. Check each data field, each user action, and the simplest-useful-version description against what you built. If something is missing, write a specific prompt requesting just that feature. This is iterative refinement — the same skill you practiced in AI Fluency Fundamentals, applied at a larger scale.

Self-Check: Did you complete the build?

Before moving on, verify:

  • Do you have a working HTML file you can open in a browser?
  • Can you add a new equipment entry and see it appear?
  • Does the overdue view highlight items past due in red?
  • Does the summary show correct counts?

If not, go back to Step 3 and review the AI output. Copy the error message and paste it back to the AI with context about what you expected.

Module 3: When Something Breaks

Estimated time: 25 min hands-on

Things will break. That is not a sign that you did something wrong — it is a normal part of building. This module teaches you how to debug with AI so you can recover from failures instead of quitting.

Failure Case 1: The Crash

Imagine you click the “Check Out” button on your equipment tracker and nothing happens. You open your browser’s developer console (F12 → Console tab) and see this error:

Browser Console Error
Uncaught TypeError: Cannot read properties of null (reading 'value')
    at checkOutItem (tracker.js:42)
    at HTMLButtonElement.onclick (index.html:28)

This looks intimidating, but you can read it. Walk through the debugging process:

Step 1: Read the Error Message

Break it down in plain language:

  • “Cannot read properties of null” — JavaScript tried to access something that does not exist.
  • “reading ‘value’” — Specifically, it tried to read the .value of a form field.
  • “at checkOutItem (tracker.js:42)” — The crash happened on line 42 of tracker.js, inside a function called checkOutItem.

Translation: the code is trying to get the value of a form field, but it cannot find that field on the page.

Step 2: Send the Error to AI with Context

What does "paste your code" mean? If you are using a browser-based AI tool, copy the entire HTML and JavaScript block the AI gave you. If you saved it to a file, open the file in a text editor and copy all the contents. The AI needs to see your code to find the bug.

Prompt — Debugging
I'm building an equipment tracker and getting this error: "Uncaught TypeError: Cannot read properties of null (reading 'value')". This happens when I click the "Check Out" button. Here's my code: [paste your full HTML/JS code here]. The form has fields for item name, serial number, person, date out, and date due.

Notice the structure of this debugging prompt:

  1. The exact error message (copied, not paraphrased)
  2. When it happens (clicking the Check Out button)
  3. The full code (AI needs to see it to find the mismatch)
  4. What the code should do (context about your form fields)

Step 3: Understand the Fix

The most common cause of this error: an ID mismatch between your HTML and JavaScript. For example, the HTML might use id="serialNumber" but the JavaScript looks for document.getElementById("serial-number"). AI will identify the exact mismatch and provide corrected code.

Apply the fix, save, and reload your browser. Verify that the Check Out button now works.
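To see why the mismatch crashes: document.getElementById returns null when no element on the page has the requested id, and reading .value on null throws exactly this TypeError. One defensive pattern, sketched below (this helper is illustrative, not part of the course's generated code), is to read fields through a function that fails with a clear message instead:

```javascript
// getElementById returns null for an unknown id; reading .value on null
// throws "Cannot read properties of null". This helper fails loudly instead.
function readField(doc, id) {
  const el = doc.getElementById(id);
  if (el === null) {
    throw new Error('No element with id "' + id + '" - check for an HTML/JS id mismatch');
  }
  return el.value;
}
```

A clear error like this points you straight at the mismatched id instead of leaving you to decode a generic TypeError.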

Key Lesson

The first time something breaks, most people quit. That is the 80%. Debugging with AI is a skill. Copy the error, provide context, and iterate. You do not need to understand every line of code — you need to understand the process: read the error, give AI context, apply the fix, verify.

Failure Case 2: The Logic Error

Not all failures produce error messages. Sometimes the code runs fine but produces the wrong result. This is harder to catch and more important to practice.

Scenario: Your overdue calculation shows items as overdue even after they were returned. An item with status “Available” still shows up highlighted in red in the overdue view. The console has no errors.

Prompt — Logic Error Debugging
My equipment tracker's overdue calculation is wrong. Items that have been returned (status is "Available") still show up in the overdue items list highlighted in red. The overdue check should only apply to items with status "Checked Out" whose due date is in the past. Here's my code: [paste code]. Can you find the logic error?

The likely cause: the overdue check compares the due date to today but does not check the status first. It flags every item whose due date has passed, regardless of whether it has been returned. The fix is a simple conditional: only flag items where status === "Checked Out" and the due date is past.

Apply the fix. Return an item and verify it disappears from the overdue list.
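The corrected conditional might look like the sketch below, assuming the planning-template field names; the status check is exactly the part the buggy version was missing:

```javascript
// An item is overdue only if it is still checked out AND past its due date.
// The buggy version skipped the status check and flagged returned items too.
function isOverdue(item, today) {
  return item.status === "Checked Out" && new Date(item.dateDue) < today;
}
```

Testing this with a returned item whose due date has passed is the quickest way to confirm the fix took.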

Self-Check: What are the three steps to debug with AI?
  1. Read the error — understand what it is telling you in plain language (or, for logic errors, describe the wrong behavior precisely).
  2. Provide context — give AI the exact error message, when it happens, your code, and what the code should do.
  3. Iterate — apply the fix, verify, and if the problem persists, send the updated code and new error back to AI.

Module 4: Your Problem — Decomposition Exercise

Estimated time: 25 min exercise

Now it is your turn. Pick a real problem from your work — something you or your section actually deals with — and decompose it using the same framework from Module 2. Before you fill in the blank template, study the completed example below.

Completed Example

Decomposition Template — Armory Sign-Out Log

Problem Statement: "I need a sign-out log for the armory that tracks who has what weapon at 1st Bn, 99th Marines."

Subtask 1: Design the data model (weapon type, serial, assigned Marine, date/time out, date/time in) [Human designs]
Subtask 2: Build the sign-out form with weapon selection and Marine lookup [AI builds]
Subtask 3: Build the sign-in process with timestamp and condition notes [AI builds]
Subtask 4: Create a real-time dashboard showing all weapons currently signed out [AI builds]
Subtask 5: Verify serial number formats match actual USMC weapon serial conventions [Human verifies]

Pattern: Centaur
Reason: Weapons tracking is high-stakes. Human makes all design decisions and verifies all outputs. AI handles implementation but human validates every record format and business rule.

Frontier Risks:
- AI may not know USMC serial number formats for specific weapon systems
- AI may not understand armory accountability regulations
- Condition reporting categories may need SME input

Notice a few things about this example:

  • The problem statement is one sentence.
  • Each subtask is small enough to accomplish in one or two prompts.
  • Every subtask has a clear label: Human designs, AI builds, or Human verifies.
  • The centaur pattern was chosen because weapons tracking has no room for error.
  • Frontier risks identify where AI’s knowledge runs out.

Your Decomposition

Fill this out on paper, in a notes app, or in a document. Do not skip any field.

Decomposition Template — Your Problem

Problem Statement: "_______________________________________________________________"

Subtask 1: _______________________________________________ [Human designs / AI builds / Human verifies]
Subtask 2: _______________________________________________ [Human designs / AI builds / Human verifies]
Subtask 3: _______________________________________________ [Human designs / AI builds / Human verifies]
Subtask 4: _______________________________________________ [Human designs / AI builds / Human verifies]

(Add more subtasks if needed. If any subtask would take more than 2-3 prompts to complete, break it down further.)

Pattern: [ ] Centaur [ ] Cyborg
Reason: ________________________________________________

Frontier Risks:
- ________________________________________________________
- ________________________________________________________
- ________________________________________________________

Peer Review (if in a live session)

If you are working through this in a group, pair up and review each other’s decompositions. Ask: “Is any subtask too big? Is there a frontier risk you missed?” If you are working alone, review your own decomposition against these two questions before moving on.

Self-Check: Review your decomposition

Look at each subtask. Could any one of them be broken down further? A subtask is too big if it would take more than two or three AI prompts to complete, or if you cannot describe its expected output in one sentence. If a subtask fails either test, split it. Also check: did you identify at least one frontier risk? Every problem has at least one area where AI's training data will not perfectly match your real-world context.

Module 5: Assignment & Path Forward

Estimated time: 10 min read

Your Assignment

Build Your Decomposed Solution

  1. Build it. Take the decomposition you created in Module 4 and build your solution using AI, following the same process from Module 2: prompt, review, iterate.
  2. Document what worked and what did not. Keep brief notes as you go. Which prompts produced good output on the first try? Which required multiple iterations?
  3. Identify at least one failure case. Something will break or produce the wrong result. When it does, document the error, your debugging prompt, and the resolution.
  4. Bring a working (or partially working) prototype. A partial build with documented lessons is more valuable than no build at all.

How to know when you are done:

  • You can demonstrate all user actions from your decomposition
  • You have tested at least one failure case and documented it
  • Someone unfamiliar with your problem could use it without your help

Path Forward

This course introduced the builder workflow: decompose, build, debug, iterate. The courses ahead go deeper:

Next Step | Duration | What You Will Learn
Platform Training | 4 hours | Build three complete tools on Power Platform, practicing centaur and cyborg work patterns.
Advanced Workshop | 4 hours | Map the frontier for your domain, build verification protocols, and practice group debugging.
Ongoing Contribution | Continuous | Build tools that serve your section, contribute to the shared toolkit, and teach others.

You have completed Builder Orientation. You know how to decompose a problem, build with AI, and debug when things break. The rest is practice.

Capstone Deliverable

Problem Definition Document

Using the Problem Definition Worksheet template, write a complete problem definition for a real workflow problem in your unit. Include: problem statement, target users, at least 3 success criteria, required functions with priorities, and data considerations.

Knowledge Check

Module 1: From User to Builder

What is the primary difference between using AI as a "user" and using AI as a "builder"?

According to Module 1, what is the number-one mistake new builders make?

The course states: "The quality of your decomposition predicts the quality of your build." What does this mean in practice?

Module 2: Equipment Tracker Guided Build

In the Equipment Tracker build, Step 1 asks you to answer three questions on paper before opening any tool. Why is this sequence important?

After running the initial build prompt, Step 3 asks you to review the output before iterating. What is the most common category of issue you should expect at this stage?

The guided build uses two refinement passes after the initial prompt. What principle from AI Fluency Fundamentals does this directly apply?

Module 3: When Something Breaks

You see this error in the browser console: "Uncaught TypeError: Cannot read properties of null (reading 'value')." In plain language, what is happening?

What are the four essential pieces of information to include in a debugging prompt to AI?

Failure Case 2 describes a "logic error" where returned items still appear in the overdue list. Why is this type of error harder to catch than a crash?

Module 4: Decomposition Framework

The decomposition template labels each subtask as "Human designs," "AI builds," or "Human verifies." In the armory sign-out log example, why was "Verify serial number formats" labeled "Human verifies" rather than "AI builds"?

The module states that a subtask is "too big" if it would take more than 2-3 prompts to complete. Why is this the threshold?

The armory sign-out log example chose the Centaur pattern. What characteristic of the problem made Centaur the right choice over Cyborg?

Module 5: Assignment and Path Forward

The assignment asks you to "identify at least one failure case" during your build. Why is documenting a failure required, not optional?

The "how to know when you are done" criteria include: "Someone unfamiliar with your problem could use it without your help." What builder skill does this criterion test?

The course describes the builder workflow as "decompose, build, debug, iterate." How does this compare to the skills learned in AI Fluency Fundamentals?
