Has Anyone Tried Using APOSD Red Flags with LLMs?


Huy Truong

Mar 30, 2026, 11:22:30 AM
to software-design-book
Hello all,

I wonder if anyone has tried incorporating John's red flags (p. 183 in the 2nd edition) into an LLM workflow (such as Cursor's agent) to help guardrail AI-generated code.

-----

More details:

From the thread "A text version of this book which can be fed to LLMs for context?", I understand that you can ask an LLM to revise a codebase according to APOSD. But I don't think that works well in practice. LLMs have limited context windows, which cap how much you can feed into a single prompt. Taking into account the length of the entire book, plus the codebase, the prompt itself, and the history of the chat session, it is simply too much, and that's why just asking the agent to modify the code "according to the book" is not effective.
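A back-of-envelope sketch of the budget problem (every count below is an assumption for illustration, not a measurement):

```python
# Rough estimate of why "paste the whole book plus the codebase" blows
# the context budget. All numbers are assumed, for illustration only.
def rough_tokens(char_count):
    # Common heuristic: roughly 4 characters per token for English text.
    return char_count // 4

book_chars = 190 * 2_000      # ~190 pages x ~2,000 chars/page (assumed)
codebase_chars = 500 * 1_500  # ~500 files x ~1,500 chars/file (assumed)

total_tokens = rough_tokens(book_chars + codebase_chars)
print(total_tokens)  # a six-figure token count, before any chat history
```

Even under these modest assumptions, the total lands well above many models' context windows, and that's before the prompt and prior conversation are added.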

I think a better approach is to set up rules based on your own understanding from reading APOSD. I have some hands-on exposure to the following red flags:
- Repetition
- Comment Repeats Code
- Vague Name
- Nonobvious Code

so I gave them a try in my first version of the rules.

If you've tried setting up something similar in Cursor (or other IDEs), please let me know. Thank you!

-----

Below is my current set of User Rules in Cursor:

```
When generating or editing code, apply these code-clarity checks:

- Repetition: avoid duplicating nontrivial logic; extract shared logic only when it improves clarity.
- Comment Repeats Code: do not add comments that restate obvious code; use comments for intent, assumptions, or edge cases.
- Vague Name: prefer precise names for variables, functions, classes, and booleans; avoid generic names like `data`, `result`, `value`, `item`, `process`, `handle`, `flag`, and `status` unless the meaning is obvious from context.
- Nonobvious Code: prefer code that is understandable on a quick read, with explicit control flow and clear return values.

Examples:
- Prefer `review_texts` over `data`
- Prefer `should_retry` over `flag`
- Prefer `sentiment_label` over `result`

When useful, make the smallest clear improvement first. Avoid over-engineering.
```
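To make the naming examples concrete, here is a small before/after sketch (the sentiment-labeling function and all names are hypothetical, chosen only to illustrate the rules):

```python
# Before: vague names, and a comment that just restates the code.
def process(data):
    # loop over data
    result = []
    for item in data:
        result.append("positive" if "good" in item else "negative")
    return result

# After: precise names; the comment explains intent, not mechanics.
def label_sentiments(review_texts):
    # Naive keyword check; a real classifier would replace this.
    sentiment_labels = [
        "positive" if "good" in review_text else "negative"
        for review_text in review_texts
    ]
    return sentiment_labels
```

Both versions behave the same; the point is that `label_sentiments(review_texts)` can be read correctly without opening the function body, while `process(data)` cannot.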

Le Tran Dat

Mar 30, 2026, 12:22:56 PM
to Huy Truong, software-design-book
Hi Huy,

Please let us know if you find this approach effective. In my experience, it only seems to work if I repeatedly restate the rules. Eventually I gave up, because the model remains quite forgetful even when I include the instructions in files like claude.md or agents.md.
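For what it's worth, one way to make such rules persistent rather than re-pasted per chat is a project rules file. A sketch below, based on Cursor's project-rules convention (`.cursor/rules/*.mdc` with frontmatter); the filename and field values here are assumptions, so check the current Cursor docs before relying on them:

```
# .cursor/rules/aposd-red-flags.mdc (hypothetical filename)
---
description: APOSD red-flag checks for generated code
alwaysApply: true
---
- Repetition: avoid duplicating nontrivial logic.
- Comment Repeats Code: comments explain intent, not mechanics.
- Vague Name: prefer precise names; avoid `data`, `result`, `flag`.
- Nonobvious Code: prefer explicit control flow and clear returns.
```

Whether this survives long sessions better than User Rules is exactly the open question in this thread.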

Dat



Huy Truong

Mar 30, 2026, 3:02:18 PM
to Le Tran Dat, software-design-book
Thanks for letting me know, Dat. Interesting. I'll ask in the Cursor forum how rules are actually enforced.