Three months ago, I had 10+ AI design tools in my workflow. Today I use five at most.

The ones I ditched weren't bad tools. They just solved problems I didn't have. Or worse, they created new problems while claiming to solve old ones.

Here's what I learned testing 15 AI design tools across multiple projects: the distinction that matters isn't which tool is best. It's knowing when to use tools for exploration versus execution. Mix those up and you waste time.

Most designers are mixing them up.

For exploration

Claude / ChatGPT for finding blind spots.

How I use it: Paste in a rough user flow and ask what I'm missing. "Here's how I think users will move through this flow. What friction am I not seeing?"

It won't give you the answer. But it points to areas you should explore more. Questions you forgot to ask… edge cases you didn't consider.

I've found Claude better for working through copy and explanations. ChatGPT better for complex flows and structured thinking. Your mileage may vary.

NotebookLM when I need to understand a new domain fast.

NotebookLM is Google's research assistant. Upload documents and it builds a knowledge base you can query.

Designing for a space I don't know? Upload research papers, competitor analyses, industry reports. Ask it to explain the domain model.

"What are the key concepts in supply chain management? How do practitioners think about this differently than I do?"

It builds a mental model from the sources you give it. Not generic ChatGPT knowledge. Specific to the documents you're working with.

Used it recently when we were designing for a vertical I knew little about. Three hours with NotebookLM got me conversational enough to ask smart questions in user interviews.

Doesn't replace talking to real users. But it gets you ready to have better conversations with them.

ChatGPT for research synthesis

After user interviews, I dump the transcripts into a GPT I’ve trained. "Find the key themes. What are people actually saying versus what I think they're saying?"

It surfaces patterns I missed. Catches when three different people described the same problem using completely different words.

Not a replacement for doing the analysis yourself. But it's faster at the first pass. I spend my time on interpretation, not just categorization.

Still read every transcript. Still do the real work. But I get to insights faster.

Lovable when I need quick visual validation.

Lovable is a rapid prototyping tool that generates working interfaces from prompts.

Some design decisions don't make sense until you see them. "Should this be tabs or a dropdown? Should the CTA be a button or inline text?"

Build it in Lovable in 20 minutes. Look at it. The answer is obvious.

Not for complex interactions. For quick "does this layout actually work" questions. When a complex prototype feels like overkill but you need more than a sketch.

The code? Throwaway. The decision? Solid.

Replit when the interaction is the design.

Replit is a browser-based coding environment. Unlike Lovable, it handles complex logic and real state management.

Complex filters with real-time updates. Multi-step states with conditional logic. Anything where the interaction behavior matters more than the visual design.

Built a search interface with faceted filters in Replit. Needed to feel how the interaction worked. Does the count update as you filter? Do filters disappear when they have zero results? What happens when you clear all filters?

Figma can't answer those questions. You need working state management.

Took two hours to build. Tested it with five people. Found major interaction problems. Fixed them. That prototype became the spec for starting work.

Use Lovable when you need to see it. Use Replit when you need to use it.

Midjourney rarely, for presentation decks.

I'm a product designer. Not making marketing materials. But sometimes you need an image for a pitch deck that conveys a feeling.

"Abstract representation of data flowing through systems" - get something that looks good enough for slide 3. Move on.

That's it. Don't use it for product work. The consistency problem is real and unsolved.

What I've ditched

Figma Make looked impressive in the Config demo. Generated layouts with no depth.

Tried it on a dashboard design. It gave me a grid of cards with perfect spacing. Zero information hierarchy. No understanding of what data matters most. No consideration of user context.

I spent more time fixing its assumptions than I would've spent just designing it.

Maybe useful if you've never designed a dashboard. If you have, it makes you slower.

This became the pattern I kept seeing. Tools optimized for demos, not real workflows. They handle the happy path beautifully. Edge cases, complex states, actual user data…they fall apart. The demos look finished because they skip everything that makes design hard.

Most Figma AI plugins solve problems I don't have.

"AI-powered layout suggestions" - I know how to make a layout.
"Smart component generation" - Breaks my design system more than it helps.
"Auto-populated content" - Lorem ipsum with extra steps.

What I noticed: they work great in Twitter demos with simple examples. Break immediately on your actual project with real constraints and edge cases.

The only plugin I kept: Remove.bg. Does one thing. Does it well. Saves me 30 seconds per image. That's enough.

AI image generators for production work.

Fine for exploration or deck images. Terrible for product UI.

Tried using them for feature illustrations. Each one looked different. Different style. Different lighting. Different level of detail.

You can't maintain brand consistency with generative images. Not yet, at least.

The surprising finding

The most useful AI tool in my workflow costs $0. NotebookLM saves me more time than tools I'm paying $60/month for.

Why? Because understanding the problem is harder than generating solutions. Most AI tools help you make things faster. NotebookLM helps you know what to make.

That's the difference.

The workflow shift that matters

Before: Sketch ideas. Design visual solutions in Figma. Build click-through prototypes. Hand-wave around the real interactions during user testing. Get incomplete feedback because users can't actually use it. Start over.

Now: Spend three hours building rough prototypes of three different approaches in Replit. Test all three with five users. Learn which interaction model works. Then design it properly.

Here's a concrete example: Redesigning a search interface with faceted filters. In the old workflow, I would've spent two days on high-fidelity Figma mocks. Instead, I built a working prototype in Replit in three hours. Tested it. Found two major interaction problems within the first five minutes of user testing.

Fixed them. Tested again. Then designed the final version.

Total time: Same eight hours. But I validated the hard part, the interaction model, before investing in visual polish.

The time I save on throwaway prototypes goes into understanding the problem. That's where good design lives.

The questions I ask before adding any tool

Before I add any tool now, I run it through three filters:

Does this help me think or does it think for me?

If it's helping me explore faster, good. If it's making decisions I should be making, bad.

Can I learn from the output or just use it?

Replit prototypes teach me what works. Figma Make output teaches me nothing.

Does this solve a problem I actually have?

Most AI tools solve problems that sound good in a pitch but don't exist in real workflows.

I don't need AI to suggest layouts. I need AI to help me understand complex user behavior. Big difference.

What I'm watching

AI for design systems is interesting in theory. A tool that knows your design system and helps maintain consistency across a large team.

Different from Figma Make, which generates new designs. I'm talking about tools that help you maintain existing systems. Use the right component. Apply the right token. Catch inconsistencies before they ship.

Some design tools are exploring this. The promise: generate a button and it matches your design system automatically. Update a color and it propagates correctly.

In practice? They don't understand the why behind your system. Don't know when to break the rules. Can't handle the nuance of "use primary button here but secondary button there."

But the problem is real. Large teams struggle with design system consistency. If someone builds AI that actually understands design intent, not just component libraries, that's worth paying for.

Cursor for designers who code. Several people I trust swear by it. The pitch: understands your entire codebase, not just the file you're working in.

I don't write enough production code to justify it. But I'm paying attention.

The threat nobody talks about

AI won't replace designers. But designers who use AI to skip hard problems will get replaced by designers who use AI to solve them faster.

The skill that matters is judgment. Knowing which of ten AI-generated options is right. Understanding why it's right. Being able to articulate that to your team.

AI makes it easier to generate options. It makes it harder to know which one matters.

That gap is getting wider, not narrower.

What actually works

Use AI for exploration. Make five prototypes instead of one. Test them all. Learn faster.

Don't use AI for execution. The code isn't good enough. The designs aren't thoughtful enough. The judgment isn't there.

The designers winning right now are the ones who prototype fast and think deeply.

AI helps with the first part. You're on your own for the second.

What AI tools have actually changed your workflow? LMK. I read everything.

— Rob
