AI Prototyping: Prompting strategies
Prompting techniques for concept exploration, co-creation, or diligent building
Prompting is becoming a core part of modern product work. Whether you’re prototyping features, exploring ideas, or shipping micro-apps, your ability to guide the AI makes all the difference. These are the prompting techniques I’ve picked up through real-world use — things that go beyond the tutorials and help get things built.
When I first started using AI prototyping tools, I thought I needed to feed them PRD-like prompts, step-by-step specs, and mini design briefs. But over time, I realised: prompting is more of a design practice in itself. There’s no one-size-fits-all approach; it depends on what stage you’re in, what kind of outcome you want, and how open you are to surprises.
After a lot of experimentation (and a fair amount of messy results), I’ve identified a few effective prompting strategies for different use cases, whether you use AI for creative exploration or diligent execution.
🌀 Prompting for Exploration
1. Prompting as co-creation
In early stages, I often go in with a vague idea — just a goal or a feeling I want the UI to have. Instead of trying to define everything, I tell the AI where I want to go, but not how to get there.
Surprisingly often, that leads to ideas I wouldn’t have come up with myself. It’s like designing with someone who’s seen a thousand apps and isn’t afraid to suggest something wild.
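For example, an early-stage prompt of mine might look something like this (an illustrative sketch; the product details are invented):
“I’m sketching a habit-tracking app for busy parents. I want the home screen to feel calm and encouraging rather than gamified. I’m not sure about the layout yet. Propose two or three different directions.”
Notice that it states the goal and the feeling, but deliberately leaves the “how” open.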
2. Embracing weirdness (aka hallucinations)
Sometimes I’ll intentionally feed it a slightly strange mix of references — like “dashboard meets a meditation app” — just to see what happens. Most of the time, it’s a bit of a mess. But occasionally, something original pops up. I treat it like sketching: quantity over perfection.
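An illustrative mash-up prompt (invented for this article):
“Design an analytics dashboard that borrows its pacing and visual calm from a meditation app: one metric in focus at a time, generous whitespace, soft transitions.”
Most variations will be throwaways; the point is volume, not precision.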
3. Using visual references
Instead of writing long-winded descriptions, I just drop in screenshots from Mobbin, Dribbble, or even photos of hand-drawn sketches. It’s faster, and the AI usually does better with visual references than with prose anyway.
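When I do add words, I keep them short and let the image carry the detail. Something like (illustrative):
“Here’s a screenshot of a card layout I like [attached]. Recreate the structure, but adapt the colors and typography to a finance app.”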
🛠 Prompting for Production
4. Sketch in Figma, build in AI (and vice versa)
My workflow is pretty fluid: I’ll start rough sketches in Figma, screenshot them, and throw them into an AI tool to get a coded version. If I feel stuck or want more space to think, I go back to Figma.
This back-and-forth feels way more natural than the old “first design, then build” approach. I don’t worry too much about clean Figma layers either — screenshots are usually enough, and messy layers sometimes confuse the AI more than they help.
5. Bring your own building blocks
Lately, I’ve been using component libraries like Aceternity UI, 21st.dev, or Magic UI. If I find a nice interaction or component, I’ll just link to it in the prompt and ask the AI to adapt it. It’s a huge time-saver.
I’ve also written separately about how to use your own design system with these tools (see the related article at the end of this post).
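A typical component-reuse prompt looks roughly like this (the link is a placeholder, not a real URL):
“Use the animated card component from [link to Aceternity UI component] as the base. Adapt it to show a user profile with avatar, name, and a follow button, and match the rest of my app’s styling.”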
🧠 Prompting for Function
6. Context helps — in small doses
Telling the AI what I’m building and who it’s for usually leads to better suggestions. But I’ve learned not to overload it. Too much context, especially if it’s not directly relevant, can just confuse things.
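In practice, one or two sentences of context are usually enough (illustrative):
“Context: this is an internal tool for warehouse staff who use it on tablets, often with gloves on. Now: make the tap targets on this list screen bigger and reduce the text density.”
Everything beyond what the current task needs, I leave out.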
7. Research before you prompt
When working on more functional stuff, I’ll often do a quick pass on available libraries or documentation first. Then I’ll reference those in the prompt. This gives the AI something more grounded to work with — especially if the task is a bit technical.
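A grounded prompt might then reference the library directly (illustrative; the component names here come from the Recharts docs):
“Build the chart using Recharts. Use a ResponsiveContainer wrapping a LineChart, and follow the official docs’ pattern for the Tooltip.”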
8. Don’t try to define everything at once
This was a big mindset shift for me. I used to feel like I had to plan everything in advance. Now I break things down. I’ll build one flow or component at a time, test it, then adjust. The AI is great at responding to feedback mid-process — you don’t have to get it all right up front.
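In practice, it looks like a short back-and-forth rather than one mega-prompt. An illustrative sequence:
“Build just the login screen: email field, password field, a primary button.”
“Good. Now add inline validation errors under each field.”
“The error color feels harsh. Soften it and add a small icon.”
Each prompt reacts to what the previous one produced.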
🔁 A few extra tips
Role prompts work surprisingly well — “Act as a senior product designer” or “Imagine you’re a frontend engineer” can shape the tone and quality of the result.
Sometimes I’ll ask the AI to critique its own work — “What would you improve in this?” — and it often gives useful follow-ups.
Give it your data: if you’re building a frontend shell around existing data, export a sample of that data as JSON or CSV and see what the tool makes of it. I was surprised by how well it could design a UI based on the structure and type of data you give it. (Note: this doesn’t work in Lovable.)
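A tiny sample goes a long way. Something like this (invented data, trimmed to a few records) is usually enough for the tool to infer the structure:
[
  { "name": "Morning run", "durationMin": 32, "completed": true, "date": "2025-03-01" },
  { "name": "Read 20 pages", "durationMin": 25, "completed": false, "date": "2025-03-01" }
]
Then: “Here’s a sample of my data. Design a daily overview screen around it.”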
That’s where I’m at with prompting right now. It’s still evolving — and honestly, still kind of magical at times. The key, at least for me, has been treating prompting less like instruction writing and more like collaborative design.
Curious how others approach this — what prompting techniques have you found most useful in your AI workflows? Reach out on LinkedIn!
Related article: Design Systems ♡ Lovable, Bolt, v0 and Replit.
AI prototyping tools like Lovable, Bolt, v0, and Replit let you rapidly spin up apps, but speed isn't enough if the results don’t fit your product. Most teams aren’t starting fresh; they’re bound by existing brands, design systems, and constraints. Over the past few weeks, I tested how well these tools integrate with real-world design systems, reusing y…