AI is good at helping you "Design It Twice"


Tim Etler

Jun 3, 2025, 11:25:04 AM
to software-design-book
I'm a huge fan of the book, and it's really deepened my thinking about software design. I've started writing down some of my thoughts inspired by it. One topic I really liked was the "Design it Twice" chapter. My most elegant solutions always come when I can re-evaluate a problem with a better understanding of its nature, after having taken a first attempt at designing a solution.

Of course, the problem is that working this way takes a lot more time, because it takes time to try things out and experiment with different approaches. The book suggests that considering a few additional designs doesn't necessarily take much extra time, but I've found that I often don't recognize the better design until I've evaluated an approach in enough detail to see where it's awkward or where it fails. Until I do that, I'm missing some understanding of what the design actually needs to accomplish.

But I've been experimenting more with LLM tools and workflows, and I've realized that where they really shine is prototyping. They let you try things out rapidly enough to actually test concepts and surface the practical failings early, so you can iterate with a better approach. In my experience so far, AI coding tools do an awful job at project structure, choosing solutions, and following best practices, but what they do let you do is try things out really quickly.

I've found it's a really good tool for exploring designs at higher levels of abstraction first, and for iterating on lower-fidelity prototypes before you attempt a higher-fidelity one. You can use AI assistance to research and explore the potential solution space of a problem much faster, consider directions you would otherwise have overlooked, and quickly probe the issues with the designs you're considering.

It works great for prototyping implementations too, because you don't need everything to be perfect; you just need to test out a few isolated ideas, and you can do that very rapidly. If you write your interfaces first, it does a good job of filling them out with something that works, so you can run a practical test of your design and actually try it out. In my experience the implementation code is nowhere near production quality, but I actually see that as a good thing: it prevents you from ever putting a prototype into production, and it keeps you from getting nerd-sniped into polishing something you're supposed to throw away, so you can focus on the higher-level abstractions first.

I wrote about it in more detail and thought it might be worth sharing with this group since it was inspired by the chapter of the book.

In general, I've been viewing AI as potentially raising every field up by a level of abstraction, making software design principles much more relevant for extracting the best output from these models. Because of the nature of how it builds off context, the more knowledgeable you are about a subject, the better information you can extract from it. I find LLMs have a lot of knowledge that doesn't get exposed unless you ask the right questions at more advanced levels of detail because you need to establish a text context to get it to pull the right predictive text from the relevant domain area training set.

I've seen lots of people trying to rush straight into trying to get AI to do things instead of starting with using it as a learning and exploration tool to narrow down potential designs from higher to lower levels of abstraction. I try to use it as a tool to improve my thinking instead of trying to have it think for me.

Mark Woodworth

Jun 4, 2025, 7:46:36 AM
to Tim Etler, software-design-book
I just wanted to say that this is one of the best descriptions of how to properly use an LLM that I’ve ever read. I’ve tried to explain it in similar terms, but not as well articulated.

“ Because of the nature of how it builds off context, the more knowledgeable you are about a subject, the better information you can extract from it. I find LLMs have a lot of knowledge that doesn't get exposed unless you ask the right questions at more advanced levels of detail because you need to establish a text context to get it to pull the right predictive text from the relevant domain area training set.

I've seen lots of people trying to rush straight into trying to get AI to do things instead of starting with using it as a learning and exploration tool to narrow down potential designs from higher to lower levels of abstraction. I try to use it as a tool to improve my thinking instead of trying to have it think for me.”

As tools to aid an individual who already has some domain knowledge in a particular topic, they are indispensable thinking partners. Treated as omniscient ‘agents’, they are fairly useless but never lacking in confidence.