Project 01 · Live at instantpotdoc.com
Built to solve a real problem: AI that gives the right answer the first time, every time, with no drift between sessions.
About the project
Instant Pot Doc converts stovetop recipes for electric pressure cookers. You paste a recipe, it walks you through a strict sequence — cooker model, quart size, processing mode — and produces a fully adapted recipe with timing, liquid adjustments, and scaled quantities.
The fill limit logic is what separates it from a simple conversion. It calculates whether each scale option is safe for your specific pot size, flagging anything that would exceed two-thirds capacity for standard recipes or half capacity for foamy foods. Every output follows the same format, every time.
It runs as a single HTML file in any browser. Users supply their own Anthropic API key — there is no subscription, no account, and no cost beyond what the API itself charges.
Practitioner AI
Each post documents a phase of the build — published weekly as part of the six-part Practitioner AI series.
The Origin Story
How a practical experiment with recipe conversion became a lesson in AI consistency.
The Victory Lap That Wasn't
Prompt drift, ChatGPT diagnosing its own ceiling, and the pivot.
When the Tool Tells You It's the Wrong Tool
Switching to Claude, building a web app in one conversation.
Building Something Real
instantpotdoc.com goes live without a developer.
I Asked AI to QA Itself
Consistency stress test, five identical runs, drift report.
What This Means for Executive Leaders
The so what for anyone leading organizations through AI.
Behind the Build
Eleven narrative stories about specific moments in the build — written in plain language for anyone who wants to go deeper than the surface story.
01
From Conversation to System
The moment the approach changed
My first attempts at using AI to convert recipes were just conversations. I described what I wanted and got back something useful — sometimes. The problem was that every conversation started from scratch. No memory of how I liked things done. No fixed sequence. No consistency.
I asked Claude directly: why does this keep changing? The answer was immediate and specific. The model was treating each session as a new conversation with no fixed rules. The solution was to stop writing instructions and start defining a system — a strict sequence of steps where each one had to complete before the next could begin. Claude called it a state machine. I recognized it immediately from thirty years of enterprise systems work.
Within one conversation Claude had rewritten the entire prompt using that structure. The difference in behavior was immediate. Steps happened in order. Nothing was skipped. The output was the same every time.
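The gated-sequence idea can be sketched in a few lines. This is a minimal illustration, not the actual prompt: the step names below are stand-ins for whatever states the real system prompt defines, but the gating rule is the same one described above — no step can run until the previous one has completed.

```python
# Minimal sketch of the "state machine" structure: a fixed sequence of
# steps where each step is gated by the one before it. Step names here
# are illustrative, not the actual prompt's states.

STEPS = ["cooker_model", "quart_size", "processing_mode", "convert"]

class RecipeSession:
    def __init__(self):
        self.completed = []  # steps finished so far, in order

    def next_step(self):
        """The only step allowed right now, or None when the sequence is done."""
        if len(self.completed) < len(STEPS):
            return STEPS[len(self.completed)]
        return None

    def complete(self, step):
        # The gate: reject anything except the single expected next step,
        # so nothing can be skipped or reordered.
        if step != self.next_step():
            raise ValueError(f"expected {self.next_step()!r}, got {step!r}")
        self.completed.append(step)

session = RecipeSession()
session.complete("cooker_model")
session.complete("quart_size")  # in-order steps succeed; out-of-order ones raise
```

The point of the structure is exactly what the conversation surfaced: a loose list of instructions can be interpreted; a gated sequence cannot.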
The lesson: the technology is new. The principles are not. Define the sequence, enforce the steps, eliminate ambiguity. That is not AI expertise. That is systems thinking — and any executive who has run a complex operation already knows how to do it.
02
The Ingredient List Moment
One small change that made everything work
I had added an ingredient substitution feature. It seemed useful — suggest swaps before converting the recipe. But every time I used it I had to stop, leave the chat, find the original recipe, and remind myself what was in it.
I described the friction to Claude and asked how to fix it. The suggestion came back in seconds: show the ingredient list first. Before asking about substitutions, before doing anything else, display exactly what is in the recipe. Then ask. One additional step. No new model capability. Just better sequencing.
The output did not change. The experience changed completely. That five-minute conversation produced the single most impactful improvement in the entire tool.
The lesson: UX matters more than raw output. A brilliant result delivered in the wrong order is still a bad tool. You do not need a UX designer to figure this out. You need to use your own tool and describe what is frustrating you.
03
Rebuilding the Prompt in Claude
What happened when I switched platforms
When I moved from ChatGPT to Claude my first instinct was to reuse the prompt I had already built. I pasted the entire system prompt and asked if Claude could work with it.
Claude did not just accept it. Within minutes it had analyzed the structure, identified where the instructions were ambiguous, and proposed a complete rewrite. It explained specifically what was wrong — certain rules were too vague to enforce, the step sequence had gaps, and recovery logic was missing. Then it rewrote the entire prompt, section by section, explaining each change as it went.
The rewrite framed the whole system as a state machine — a concept I recognized immediately from enterprise IT. Instead of a list of instructions the model could interpret loosely, it became a strict sequence where each step was gated by the previous one. The drift problem disappeared.
The lesson: switching tools is not starting over. Everything you learned about the problem comes with you. The second build is always faster and cleaner than the first — because you already know what correct looks like.
04
The QA Conversation
Asking AI to test itself
After the app was live I wanted to know if I had actually solved the drift problem — or just moved it from one platform to another. I asked Claude directly: can you test this for consistency?
Claude was immediately honest about its limitations. It could not run automated scheduled tests or persist between sessions. But within the same message it proposed a workaround — run the same recipe through the system prompt five times with identical inputs in a single session, compare every output element, and produce a drift report. It laid out exactly what it would test and what it would look for before starting.
The report came back as a clean comparison table. Every critical element — timing, release method, output format order, fill level confirmation, manufacturer directions — was identical across all five runs. Minor variation appeared only in stylistic phrasing of the rationale. Nothing that affected safety or accuracy.
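The five-run comparison can be expressed as a small harness. This is a sketch of the method, not the actual test Claude ran: `convert_recipe` below is a deterministic stand-in for the real model call, and the field names are illustrative — in practice each run would send the identical recipe and system prompt to the API and parse the structured output before comparing.

```python
# Sketch of the consistency check: run identical inputs N times and
# compare every critical output field across runs. convert_recipe is a
# placeholder for the real model call.

CRITICAL_FIELDS = ["timing", "release_method", "format_order", "fill_check"]

def convert_recipe(recipe):
    # Stand-in returning a fixed structured result; the real call would
    # hit the model with the same system prompt and inputs every run.
    return {
        "timing": "8 min high pressure",
        "release_method": "natural release 10 min",
        "format_order": "ingredients,steps,notes",
        "fill_check": "SAFE",
        "rationale": "Beans need extra liquid.",  # stylistic field, allowed to vary
    }

def drift_report(recipe, runs=5):
    """Compare each critical field across runs; flag any variation as drift."""
    outputs = [convert_recipe(recipe) for _ in range(runs)]
    report = {}
    for field in CRITICAL_FIELDS:
        values = {out[field] for out in outputs}
        report[field] = "IDENTICAL" if len(values) == 1 else f"DRIFT: {sorted(values)}"
    return report
```

The design choice worth noting is the split between critical fields and stylistic ones: timing and safety values must be byte-identical, while phrasing of the rationale is allowed to vary — which matches the result described above.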
The lesson: AI is not just a building tool. It is a testing tool. You do not need a QA team. You need a clear definition of what correct looks like — and the willingness to ask for a check. That is a methodology any executive can apply today.
05
The Fill Limit Problem
What the original tool had that the new one was missing
Weeks into using the rebuilt app I noticed something missing. The original custom GPT in ChatGPT had asked for the size of my Instant Pot in quarts and used it to warn me when a scaled recipe would overfill the pot. The Claude version did not have this yet.
I described what the original had done. Claude's response was immediate and specific — it did not just add a quart size question, it designed a complete fill safety system. The quart size question became a mandatory step in the startup sequence. Fill limits were defined precisely: two-thirds capacity for standard recipes, half capacity for foamy foods like beans and grains. The scaling step was redesigned to calculate estimated volume at every scale factor and label each option SAFE, OVER LIMIT, or UNDER MINIMUM before the user chose. The entire change happened in one conversation.
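The core of that safety system is a simple calculation. The sketch below illustrates the labeling logic described above; the fill-limit fractions come from the design, but the volume figures, the minimum-liquid floor, and the example numbers are hypothetical — the real tool estimates volume from the actual ingredient list.

```python
# Sketch of the fill-safety check: compare estimated recipe volume at
# each scale factor against the pot's fill limit. The minimum-liquid
# floor and example volumes are illustrative assumptions.

FILL_LIMIT = {"standard": 2 / 3, "foamy": 1 / 2}  # fraction of pot capacity
MIN_LIQUID_QUARTS = 0.25                          # hypothetical minimum volume

def label_scale(base_volume_qt, scale, pot_qt, food_type="standard"):
    """Label one scale option SAFE, OVER LIMIT, or UNDER MINIMUM."""
    volume = base_volume_qt * scale
    if volume > pot_qt * FILL_LIMIT[food_type]:
        return "OVER LIMIT"
    if volume < MIN_LIQUID_QUARTS:
        return "UNDER MINIMUM"
    return "SAFE"

# Example: a 3-quart base recipe of beans (foamy) in a 6-quart pot.
# The foamy limit is half of 6 quarts, so 1.5x scaling (4.5 qt) is unsafe.
options = {s: label_scale(3.0, s, 6.0, "foamy") for s in (0.5, 1.0, 1.5)}
```

Presenting the label alongside each scale option before the user chooses is what turns a warning into a guardrail.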
The lesson: describe the problem precisely and AI solves it precisely. Vague requests get vague results. The more specifically you can articulate what is missing, the more complete the solution you get back.
06
Building the Web App
What a non-developer can build in an afternoon
ChatGPT had told me I needed a front end to solve the consistency problem. That sounded like a developer project. I do not write code. I have never built a web application.
I described what I wanted to Claude in plain English. No wireframes. No technical specification. A shareable tool that anyone could open in a browser. A clean landing page. My logo. My name. The full recipe conversion flow underneath. Within the same conversation — maybe forty minutes — Claude had produced a working single-file HTML application with the logo embedded, the brand colors applied, and the full state machine logic running. I downloaded one file. I dragged it onto a website. It was live.
When I wanted changes — the logo was missing, the branding needed updating, the quart size step needed adding — I described each one in plain English and got back an updated file. Every iteration took minutes, not days.
The lesson: the barrier to building real AI tools is lower than most executives believe. But it requires thinking like a builder — defining requirements, specifying outputs, iterating until it works. That is not a technical skill. That is a leadership skill you already have.
07
Getting It Live
How AI walked a non-technical executive through every step of deployment
Having a working app file and having a live website are two different things. I had the file. I had never deployed anything to the web in my life.
Claude recommended two specific platforms: Namecheap for domain registration and Netlify for hosting. Not generic categories — specific tools with specific reasons. Namecheap because it was straightforward, affordable, and had a promo code that brought the domain to $6.79. Netlify because it supported drag-and-drop deployment — literally drag a file onto a webpage and your site is live.
Every step came with exact instructions. Buy the domain here. Click this button. Type this into this field. Copy these nameserver addresses. Paste them here. Add an A record with this value. Add a CNAME with this value. Wait up to two hours for propagation.
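For anyone unfamiliar with those terms, the records involved look roughly like this. The values are illustrative only — the actual IP address and CNAME target come from the hosting provider's setup screen, not from this sketch.

```
; Illustrative DNS records for pointing a domain at a host.
; Use the exact values your hosting provider gives you.
instantpotdoc.com.      A      203.0.113.10            ; host's load balancer IP
www.instantpotdoc.com.  CNAME  your-site.netlify.app.  ; host-provided target
```

The A record points the bare domain at the host's IP address; the CNAME points the www subdomain at the host by name. Propagation delay is why the final step is simply to wait.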
When the screen I saw did not match the instructions — which happened repeatedly — I took a screenshot and shared it directly in the conversation. Every time, within seconds, Claude identified exactly where I was, what I was looking at, and what to do next. That screenshot technique became my standard method for any technical process.
From zero knowledge of web deployment to a live branded site at a real domain: one conversation, one afternoon, $6.79.
The lesson: AI does not just answer questions. It guides processes. The combination of specific step-by-step instructions and real-time screenshot correction makes technical tasks accessible to anyone willing to share what they are actually seeing.
08
The Screenshot Method
The technique that made everything else possible
Every step-by-step guide assumes the screen you see matches the screen being described. It rarely does. Interfaces change. Options move. Buttons get renamed. What the guide says is on the left is now on the right.
I discovered early in this process that the fastest way to resolve that gap was a screenshot — shared directly into the conversation. Not a description of what I was seeing. Not a typed explanation of where I was stuck. A screenshot.
During the domain and hosting setup alone I shared more than a dozen screenshots. Each one got an immediate, specific response: you are on the wrong tab, click Advanced DNS. That button is hidden under the dropdown on the right. That error means your API key needs credits — here is exactly where to add them.
The screenshot became my standard method for any technical process throughout this entire project — deploying the app, configuring DNS, setting up LinkedIn posts, navigating Namecheap and Netlify for the first time. It works because AI can see what you see and respond to what is actually on your screen rather than what it assumes is there.
The lesson: when instructions and reality diverge, do not type a description — take a screenshot. It is the single most effective technique a non-technical person can use when working with AI on any process that involves navigating an interface.
09
The Domain Decision
Six dollars and seventy-nine cents for a credential
Once the app was working I had a choice. Share it with a free Netlify URL — something forgettable and random — or buy a real domain.
I asked Claude directly: does the domain actually matter? The answer was unambiguous. For a personal project used privately, no. For something being referenced in a professional LinkedIn series aimed at senior executives, yes. A random Netlify URL signals prototype. A real domain signals intentional. For thought leadership purposes the URL is part of the credential.
Claude recommended Namecheap, provided the exact search to run, and flagged a promo code that brought the price to $6.79 for the first year. I checked availability, added it to cart, and completed the purchase in under five minutes. The domain connection to Netlify followed the same screenshot-guided process — every step laid out, every confusion resolved in real time.
The lesson: sometimes the six-dollar decision is the most important one. Commitment changes how you treat a project — and how others perceive it.
10
What ChatGPT Told Me
The most useful conversation I had with the tool I was leaving
After weeks of trying to fix the UI drift — tightening the prompt, adding guardrails, testing and adjusting — I recognized a feeling I had experienced many times in my career. Diminishing returns. More effort, less improvement.
So I asked ChatGPT directly: is GPT the right tool for this? Can this consistency problem actually be solved? The answer was honest and specific. The model told me that even with strict prompting it was still probabilistic, context-sensitive, and slightly variable by nature. You can reduce inconsistency. You cannot guarantee it. Then: you have outgrown prompt-only control. You are trying to build a tool, not just a chatbot. You need a front end.
That answer led directly to everything that came next — the switch to Claude, the web app, the domain, the LinkedIn series. A tool that honestly diagnosed its own ceiling was more valuable in that moment than one that kept trying to be something it was not.
The lesson: ask direct questions. AI systems will often give you honest answers about their own limitations if you ask plainly. That honesty is a feature, not a flaw — and it is more useful than a confident wrong answer.
11
The First LinkedIn Post
526 impressions, 302 people reached, and two new followers from outside my network
I had been on LinkedIn for years. I had never posted original content. Not once.
Claude drafted the full post from the bullet outline we had built together, then built a pull quote graphic to accompany it — a clean image with a green bar, italic serif type, and the sharpest line from the whole story: Chat is good for answers. It is not good for consistency. The graphic was produced as an HTML file, opened in a browser, and screenshotted — because LinkedIn does not accept SVG files, which Claude flagged and solved in the same conversation.
Even the posting process required real-time help. LinkedIn collapses blank lines on paste. Bullet points from external sources disappear. Every spacing decision has to be made manually inside the editor. Claude walked through each formatting fix as I shared screenshots of what the draft looked like at each stage.
By the next morning: 777 impressions, 455 members reached, 33 reactions, 3 comments, 1 save. The audience was 26% senior level, 22% IT services and consulting, with CEOs in the mix. For a first post ever with no prior content history and no ad spend, the algorithm had picked it up and pushed it to the right people.
The lesson: AI can help you publish, not just build. From drafting to formatting to graphics to real-time troubleshooting — the same tool that built the app helped get the story of building it in front of the right audience.