Overview & Learning Objectives
By the end of this course you will be able to:
- Build a working prototype of a simple tool using AI, starting from a blank screen.
- Apply task decomposition to break a real problem into buildable subtasks before touching any tool.
- Debug with AI — read an error message, provide context, and iterate until the problem is resolved.
Before You Begin
This page assumes you have completed AI Fluency Fundamentals and can apply all six 201 skills: Task Decomposition, Context Assembly, Iterative Refinement, Quality Judgment, Frontier Recognition, and Failure Recovery. If you cannot recall at least four of these from memory, go back and review before continuing.
| Module | Est. Time | 201 Skills Applied |
|---|---|---|
| From User to Builder | 10 min read | Task Decomposition |
| Guided Build — Equipment Tracker | 45 min hands-on | Task Decomposition, Context Assembly, Iterative Refinement |
| When Something Breaks | 25 min hands-on | Iterative Refinement, Quality Judgment, Failure Recovery |
| Your Problem — Decomposition Exercise | 25 min exercise | Task Decomposition, Frontier Recognition |
| Assignment & Path Forward | 10 min read | All six |
Module 1: From User to Builder
Estimated time: 10 min read
Bridge from User to Builder
In AI Fluency Fundamentals you learned to manage AI as a user — writing prompts, refining outputs, and judging quality inside a single conversation. Building tools is the same set of skills applied at higher complexity. Instead of managing one conversation, you are managing an entire project through AI. The 201 skills do not change; the scope does.
Think of it this way: a user asks AI to draft an email. A builder asks AI to create the system that drafts, tracks, and sends emails. The difference is not talent — it is decomposition. You learn to see the system as parts, then build each part with the same prompting skills you already have.
The Builder Mindset: Decompose Before Building
Before you open any tool — before you type a single prompt — decompose on paper or a whiteboard. What are the components? What does the data look like? What does the user need to see? What actions do they need to take?
This is Task Decomposition applied to system design. The quality of your decomposition predicts the quality of your build. A vague plan produces a vague tool. A specific plan produces a specific tool.
Common Mistake
The number-one mistake new builders make is opening the tool first and “figuring it out as they go.” This works for simple tasks. It fails completely for anything with more than two or three moving parts. Decompose first. Always.
Self-Check: What is the first thing you should do before opening any tool?
Decompose the problem on paper. Identify the data, the user actions, and the components before writing a single prompt. If you skipped straight to “open the AI tool and start prompting,” revisit the builder mindset section above.
Module 2: Guided Build — Equipment Tracker
Estimated time: 45 min hands-on
In this module you will build a working equipment checkout tracker for 1st Bn, 99th Marines from scratch, using AI. Follow each step in order. By the end you will have a functional web application and a repeatable process you can apply to any problem.
Step 1: Decompose the Problem
The problem: “I need a way to track equipment checkout for my section.”
Before touching any tool, answer three questions on paper:
- What data fields do I need?
- What does the user need to do?
- What is the simplest useful version?
Here is what that looks like filled out:
Planning Template — Equipment Tracker
Notice how specific this is. You have not opened a tool yet, but you already know exactly what you are asking AI to build. This is decomposition in action.
Step 2: Write Your First Prompt
Now translate your planning template into a prompt. Be specific about every field, every action, and the technology. Here is the prompt to use:
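If the prompt block does not render in your version of this page, an equivalent prompt (assembled from the Step 1 planning questions and the six fields listed in Step 3; illustrative wording, not the course's exact prompt) would read:

```text
Build a single-file web page (HTML, CSS, and JavaScript, no frameworks) that
tracks equipment checkout. The entry form needs six fields: item name (text),
serial number (text), person checking out (name and rank, text), date checked
out (date picker), date due (date picker), and status (dropdown: Available,
Checked Out, Maintenance). Submitting the form should add the entry to a table
below it showing all equipment with its current status.
```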
Copy this prompt into your AI tool and run it. You should receive HTML, CSS, and JavaScript for a basic data entry form.
What to Expect
Your output should include a data entry form with six fields: Item Name (text), Serial Number (text), Person (name and rank, text), Date Checked Out (date), Date Due (date), and Status (dropdown: Available/Checked Out/Maintenance). If your output is missing fields or uses different input types, that is normal — use the refinement prompts in the next step to fix it.
Step 3: Review the Output
Before you do anything else, review what you got. Open the HTML file in a browser and check:
- Does it have all six data fields? Item name, serial number, person (name and rank), date out, date due, status.
- Does the form actually work? Can you type in each field and submit?
- Does it display submitted entries? You should see a table or list of items.
Common Issues at This Stage
- AI might omit the status field or default everything to one status.
- The “overdue” calculation might be missing entirely — AI often handles the form but skips the business logic.
- Date fields might not use a date picker.
- The form might not clear after submission.
These are all normal. This is why we iterate. Note what is missing and move to Step 4.
Step 4: Iterate
Now refine. You will make two passes, each adding specific functionality. After each prompt, review the output in your browser before moving to the next one.
Refinement Pass 1: Overdue View and Check-In
What to Expect
After this pass, your tracker should show a separate overdue items section or visual indicator (typically red text or red background) for any item whose due date is in the past. Each checked-out item should have a button or link labeled “Check In” or similar. Clicking it should change the status back to “Available” and remove the item from the overdue view.
After running this prompt, check:
- Do overdue items actually appear in red?
- Does the check-in button work?
- Does checking in an item update its status immediately?
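As a minimal sketch of what correct check-in behavior looks like, assuming entries are stored as plain JavaScript objects (the property names here are illustrative, not necessarily what AI will generate):

```javascript
// Flip a checked-out item back to "Available" and clear the holder.
// Returns a new object rather than mutating the old one, so the table
// re-render and the overdue view both see the updated status.
function checkIn(item) {
  return { ...item, status: "Available", person: null, dateDue: null };
}

const rifle = { item: "M4", serial: "SN12345A", status: "Checked Out",
                person: "Sgt Doe", dateDue: "2024-01-01" };
const returned = checkIn(rifle);
// returned.status is now "Available"; rifle itself is unchanged
```

Whatever structure your AI generated, the test is the same: after check-in, the status must read "Available" and the item must drop out of the overdue view.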
Refinement Pass 2: Validation and Summary
What to Expect
Your form should now show a validation error message when the serial number is outside the 6–12 alphanumeric character range. At the top of the page, you should see a summary area displaying three counts: total items, currently checked out, and overdue. These counts should update automatically when you add a new entry or check an item back in. If the counts do not update, the AI likely generated static text instead of dynamic counters — mention this in your next refinement prompt.
After running this prompt, test the validation:
- Try entering a serial number that is too short (e.g., “AB1”). Does it reject it?
- Try a valid serial number (e.g., “SN12345A”). Does it accept it?
- Do the summary counts update when you add or check in items?
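For reference, the validation rule itself is one regular expression. A sketch assuming the 6–12 alphanumeric rule stated above (the function name is illustrative):

```javascript
// A serial number is valid if it is 6-12 characters, letters and digits only.
function isValidSerial(serial) {
  return /^[A-Za-z0-9]{6,12}$/.test(serial);
}

isValidSerial("AB1");      // false - too short, should trigger the error message
isValidSerial("SN12345A"); // true  - accepted
```

If your generated code rejects "SN12345A" or accepts "AB1", paste the validation function back to the AI along with this rule.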
Step 5: Review Your Final Product
Your Finished Tracker Should Include
- A data entry form with all six fields (item, serial, person, date out, date due, status)
- Serial number validation (alphanumeric, 6–12 characters)
- A table showing all equipment with current status
- A check-in button next to each checked-out item
- An overdue items view with red highlighting
- A summary dashboard: total items, checked out, overdue
- Automatic status updates when items are checked in or become overdue
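The summary dashboard reduces to three counts over the same item list. A sketch, again assuming items are plain objects with illustrative property names:

```javascript
// Compute the three dashboard counts from the item list.
// An item counts as overdue only if it is still checked out AND past due.
function summarize(items, today = new Date()) {
  const checkedOut = items.filter(i => i.status === "Checked Out");
  const overdue = checkedOut.filter(i => new Date(i.dateDue) < today);
  return { total: items.length, checkedOut: checkedOut.length, overdue: overdue.length };
}

const roster = [{ status: "Checked Out", dateDue: "2024-05-01" }];
summarize(roster, new Date("2024-06-01")); // { total: 1, checkedOut: 1, overdue: 1 }
```

Because the counts are recomputed from the list each time, they update automatically when you add or check in an item; static text in place of a function like this is the "static counters" failure mentioned in Step 4.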
Take a moment to compare your result against the original planning template from Step 1. Every requirement you wrote down before opening any tool should now exist in the finished product. If anything is missing, write one more targeted prompt to add it.
Self-Check: Does your tracker have all original requirements?
Go back to the planning template in Step 1. Check each data field, each user action, and the simplest-useful-version description against what you built. If something is missing, write a specific prompt requesting just that feature. This is iterative refinement — the same skill you practiced in AI Fluency Fundamentals, applied at a larger scale.
Self-Check: Did you complete the build?
Before moving on, verify:
- Do you have a working HTML file you can open in a browser?
- Can you add a new equipment entry and see it appear?
- Does the overdue view highlight items past due in red?
- Does the summary show correct counts?
If not, go back to Step 3 and review the AI output. Copy the error message and paste it back to the AI with context about what you expected.
Module 3: When Something Breaks
Estimated time: 25 min hands-on
Things will break. That is not a sign that you did something wrong — it is a normal part of building. This module teaches you how to debug with AI so you can recover from failures instead of quitting.
Failure Case 1: The Crash
Imagine you click the “Check Out” button on your equipment tracker and nothing happens. You open your browser’s developer console (F12 → Console tab) and see this error:
Uncaught TypeError: Cannot read properties of null (reading 'value')
at checkOutItem (tracker.js:42)
at HTMLButtonElement.onclick (index.html:28)
This looks intimidating, but you can read it. Walk through the debugging process:
Step 1: Read the Error Message
Break it down in plain language:
- “Cannot read properties of null” — JavaScript tried to access something that does not exist.
- “reading ‘value’” — Specifically, it tried to read the .value of a form field.
- “at checkOutItem (tracker.js:42)” — The crash happened on line 42 of tracker.js, inside a function called checkOutItem.
Translation: the code is trying to get the value of a form field, but it cannot find that field on the page.
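You can reproduce this failure mode in a few lines, outside the browser: document.getElementById returns null when no element on the page matches, and reading .value on null throws a TypeError like the one above. A minimal sketch:

```javascript
// getElementById returns null when nothing on the page has that id;
// simulate that here with a lookup that found no match.
const field = null;            // what getElementById("wrong-id") returns
try {
  const serial = field.value;  // throws: cannot read a property of null
} catch (e) {
  console.log(e instanceof TypeError); // true
  console.log(e.message);              // engine-specific wording, mentions null and 'value'
}
```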
Step 2: Send the Error to AI with Context
What does "paste your code" mean? If you are using a browser-based AI tool, copy the entire HTML and JavaScript block the AI gave you. If you saved it to a file, open the file in a text editor and copy all the contents. The AI needs to see your code to find the bug.
Notice the structure of this debugging prompt:
- The exact error message (copied, not paraphrased)
- When it happens (clicking the Check Out button)
- The full code (AI needs to see it to find the mismatch)
- What the code should do (context about your form fields)
Step 3: Understand the Fix
The most common cause of this error: an ID mismatch between your HTML and JavaScript. For example, the HTML might use id="serialNumber" but the JavaScript looks for document.getElementById("serial-number"). AI will identify the exact mismatch and provide corrected code.
Apply the fix, save, and reload your browser. Verify that the Check Out button now works.
Key Lesson
The first time something breaks, most people quit. That is the 80% who stay users instead of becoming builders. Debugging with AI is a skill. Copy the error, provide context, and iterate. You do not need to understand every line of code — you need to understand the process: read the error, give AI context, apply the fix, verify.
Failure Case 2: The Logic Error
Not all failures produce error messages. Sometimes the code runs fine but produces the wrong result. This is harder to catch and more important to practice.
Scenario: Your overdue calculation shows items as overdue even after they were returned. An item with status “Available” still shows up highlighted in red in the overdue view. The console has no errors.
The likely cause: the overdue check compares the due date to today but does not check the status first. It flags every item whose due date has passed, regardless of whether it has been returned. The fix is a simple conditional: only flag items where status === "Checked Out" and the due date is past.
Apply the fix. Return an item and verify it disappears from the overdue list.
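The fix described above can be sketched as a before/after pair of pure functions (property names illustrative):

```javascript
// Buggy: flags every item whose due date has passed, even returned ones.
function isOverdueBuggy(item, today) {
  return new Date(item.dateDue) < today;
}

// Fixed: an item can only be overdue while it is still checked out.
function isOverdue(item, today) {
  return item.status === "Checked Out" && new Date(item.dateDue) < today;
}

const today = new Date("2024-06-01");
const returnedItem = { status: "Available", dateDue: "2024-05-01" };
isOverdueBuggy(returnedItem, today); // true  - the bug: still flagged after return
isOverdue(returnedItem, today);      // false - correct
```

Note that testing the returned item is what exposes the bug; the two functions behave identically on items that are still checked out, which is why the console stayed silent.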
Self-Check: What are the three steps to debug with AI?
- Read the error — understand what it is telling you in plain language (or, for logic errors, describe the wrong behavior precisely).
- Provide context — give AI the exact error message, when it happens, your code, and what the code should do.
- Iterate — apply the fix, verify, and if the problem persists, send the updated code and new error back to AI.
Module 4: Your Problem — Decomposition Exercise
Estimated time: 25 min exercise
Now it is your turn. Pick a real problem from your work — something you or your section actually deals with — and decompose it using the same framework from Module 2. Before you fill in the blank template, study the completed example below.
Completed Example
Decomposition Template — Armory Sign-Out Log
Notice a few things about this example:
- The problem statement is one sentence.
- Each subtask is small enough to accomplish in one or two prompts.
- Every subtask has a clear label: Human designs, AI builds, or Human verifies.
- The Centaur pattern was chosen because weapons tracking has no room for error.
- Frontier risks identify where AI’s knowledge runs out.
Your Decomposition
Fill this out on paper, in a notes app, or in a document. Do not skip any field.
Decomposition Template — Your Problem
Peer Review (if in a live session)
If you are working through this in a group, pair up and review each other’s decompositions. Ask: “Is any subtask too big? Is there a frontier risk you missed?” If you are working alone, review your own decomposition against these two questions before moving on.
Self-Check: Review your decomposition
Look at each subtask. Could any one of them be broken down further? A subtask is too big if it would take more than two or three AI prompts to complete, or if you cannot describe the expected output in one sentence. If a subtask fails either test, split it. Also check: did you identify at least one frontier risk? Every problem has at least one area where AI’s training data will not perfectly match your real-world context.
Module 5: Assignment & Path Forward
Estimated time: 10 min read
Your Assignment
Build Your Decomposed Solution
- Build it. Take the decomposition you created in Module 4 and build your solution using AI, following the same process from Module 2: prompt, review, iterate.
- Document what worked and what did not. Keep brief notes as you go. Which prompts produced good output on the first try? Which required multiple iterations?
- Identify at least one failure case. Something will break or produce the wrong result. When it does, document the error, your debugging prompt, and the resolution.
- Bring a working (or partially working) prototype. A partial build with documented lessons is more valuable than no build at all.
How to know when you are done:
- You can demonstrate all user actions from your decomposition
- You have tested at least one failure case and documented it
- Someone unfamiliar with your problem could use it without your help
Path Forward
This course introduced the builder workflow: decompose, build, debug, iterate. The courses ahead go deeper:
| Next Step | Duration | What You Will Learn |
|---|---|---|
| Platform Training | 4 hours | Build three complete tools on Power Platform practicing centaur and cyborg work patterns. |
| Advanced Workshop | 4 hours | Map the frontier for your domain, build verification protocols, and practice group debugging. |
| Ongoing Contribution | Continuous | Build tools that serve your section, contribute to the shared toolkit, and teach others. |
You have completed Builder Orientation. You know how to decompose a problem, build with AI, and debug when things break. The rest is practice.
Capstone Deliverable
Problem Definition Document
Using the Problem Definition Worksheet template, write a complete problem definition for a real workflow problem in your unit. Include: problem statement, target users, at least 3 success criteria, required functions with priorities, and data considerations.
Knowledge Check
Module 1: From User to Builder
What is the primary difference between using AI as a "user" and using AI as a "builder"?
The difference is scope, not talent. A user asks AI to draft an email; a builder asks AI to create the system that drafts, tracks, and sends emails. Building applies the same 201 skills (context assembly, iterative refinement, etc.) at a higher level of complexity through decomposition.
According to Module 1, what is the number-one mistake new builders make?
The most common mistake is jumping straight into the tool without decomposing first. This approach works for simple tasks but fails completely for anything with more than two or three moving parts. The builder mindset requires you to identify data, user actions, and components on paper before writing a single prompt.
The course states: "The quality of your decomposition predicts the quality of your build." What does this mean in practice?
Decomposition is Task Decomposition applied to system design. When you clearly identify data fields, user actions, and the simplest useful version before building, each prompt to AI is specific and targeted. Vague plans lead to vague prompts, which lead to generic output that does not solve your actual problem.
Module 2: Equipment Tracker Guided Build
In the Equipment Tracker build, Step 1 asks you to answer three questions on paper before opening any tool. Why is this sequence important?
The three planning questions (what data fields, what user actions, what is the simplest useful version) translate directly into a specific prompt. The Equipment Tracker prompt included every data field, every user action, and the technology choice because the planning template identified them first. Without this planning, you would write a vague prompt like "build me a tracker" and get generic output.
After running the initial build prompt, Step 3 asks you to review the output before iterating. What is the most common category of issue you should expect at this stage?
AI commonly handles the form structure but skips business logic. The module specifically warns that the overdue calculation may be missing entirely, the status field might default to a single value, date fields might not use pickers, and the form might not clear after submission. These are normal first-draft gaps that iterative refinement addresses.
The guided build uses two refinement passes after the initial prompt. What principle from AI Fluency Fundamentals does this directly apply?
The two refinement passes are Iterative Refinement in action. Pass 1 adds the overdue view and check-in functionality. Pass 2 adds validation and a summary dashboard. Each pass targets specific missing functionality with precise instructions, just like the weekend safety brief exercise in AI Fluency where three passes took a 70% draft to 95%.
Module 3: When Something Breaks
You see this error in the browser console: "Uncaught TypeError: Cannot read properties of null (reading 'value')." In plain language, what is happening?
"Cannot read properties of null" means JavaScript tried to access something that does not exist. "Reading 'value'" means it specifically tried to read a form field's value. The most common cause is an ID mismatch: the HTML uses one name (like "serialNumber") but the JavaScript looks for a different one (like "serial-number"). You do not need to be a programmer to read this; you need to break it into parts.
What are the four essential pieces of information to include in a debugging prompt to AI?
An effective debugging prompt includes: (1) the exact error message, copied verbatim rather than paraphrased; (2) when it happens, such as "when I click the Check Out button"; (3) the full code so AI can find the mismatch; and (4) what the code should do, providing context about your intended behavior. This structure gives AI everything it needs to locate and fix the problem.
Failure Case 2 describes a "logic error" where returned items still appear in the overdue list. Why is this type of error harder to catch than a crash?
Logic errors produce no console errors. The code runs fine but produces the wrong result. In this case, the overdue check compared the due date to today but did not also check whether the item had been returned. The only way to catch this is to test the actual behavior: return an item and verify it disappears from the overdue list. This is why Quality Judgment matters during building, not just during document review.
Module 4: Decomposition Framework
The decomposition template labels each subtask as "Human designs," "AI builds," or "Human verifies." In the armory sign-out log example, why was "Verify serial number formats" labeled "Human verifies" rather than "AI builds"?
This is a frontier risk in action. AI may not know the actual serial number formats for specific USMC weapon systems. The frontier risks section explicitly flags this: "AI may not know USMC serial number formats for specific weapon systems." A human with domain knowledge must verify that the format rules match reality, because AI will generate plausible-looking but potentially wrong validation rules.
The module states that a subtask is "too big" if it would take more than 2-3 prompts to complete. Why is this the threshold?
Small subtasks have clear expected outputs that you can describe in one sentence and verify immediately. If a subtask requires many prompts, it means the scope is too broad for you to judge whether AI succeeded. Breaking it smaller lets you apply Quality Judgment at each step, catching errors early instead of discovering them in a tangled final product.
The armory sign-out log example chose the Centaur pattern. What characteristic of the problem made Centaur the right choice over Cyborg?
The example explicitly states: "Weapons tracking is high-stakes. Human makes all design decisions and verifies all outputs." Centaur mode provides clear division of labor with sequential handoffs, which is essential when accuracy is non-negotiable. Cyborg mode blurs the boundary between human and AI contributions, which is appropriate for lower-stakes iterative work but not for accountability-sensitive systems.
Module 5: Assignment and Path Forward
The assignment asks you to "identify at least one failure case" during your build. Why is documenting a failure required, not optional?
Module 3 established that things will break and that is normal. The assignment requires documenting a failure because debugging with AI is a core builder skill. Recording the error, your debugging prompt, and the resolution reinforces the process (read the error, provide context, iterate) and proves you can recover from failures instead of quitting.
The "how to know when you are done" criteria include: "Someone unfamiliar with your problem could use it without your help." What builder skill does this criterion test?
Quality Judgment at the builder level means evaluating not just whether the code runs, but whether the tool is usable, complete, and appropriate for its intended audience. If an unfamiliar user cannot figure out how to use the tool without your help, it is not done yet regardless of whether the code works. This is the same skill from AI Fluency applied at a system level.
The course describes the builder workflow as "decompose, build, debug, iterate." How does this compare to the skills learned in AI Fluency Fundamentals?
Building is not a new discipline; it is the same 201 skills at higher complexity. Task Decomposition breaks the project into buildable pieces. Context Assembly produces specific prompts. Iterative Refinement improves each piece. Quality Judgment evaluates the output. Frontier Recognition identifies where AI will need human help. The module overview table maps each module to the specific 201 skills it applies.
Course Completion Checklist
Course Complete!