Re: Quick question beyond GSoC scope

Dan Petrisko

Mar 9, 2026, 6:36:48 PM (6 days ago) Mar 9
to Aman Sharma, black-parrot
Hi Aman,

My original philosophy of testing was that testbenches are very "expensive" to maintain, so having a few testbenches with a ton of tests is the right approach. This is why one testbench was originally allocated each for FE, BE, ME, and TOP. Over time, I've softened on this: unit tests are valuable for raising code coverage, while integration tests work better as a nightly regression. I still believe that building testbenches along well-defined interfaces is the best way to avoid accidental decoupling.

My understanding of LLM capabilities here has changed over the last year. A year ago, I would have said they were unlikely to actually speed up the debug process. Now I believe they may be useful in two distinct ways:
- Spinning up an isolated testbench for a specific issue. I have heard reports of agents being presented with a bug, generating a testbench, and reading dumps to find the root cause. This seems like a great use case, since we throw away the code after identifying the bug.
- Generating scenarios for existing UVM-style testbenches. This is what I would like to explore after the GSoC period. Once the testbenches are solid and industry-standard, I believe LLMs could generate code specifically for exposing certain behaviors, enhancing coverage, etc.

I still don't trust the LLM to generate the testbench itself. And as GSoC is primarily for learning, I expect the student to handwrite it.

> From your experience — is manual testbench writing actually the bottleneck, or is the harder problem somewhere else in the verification flow?

So to answer this more directly: testbench maintenance and manipulation are the current bottlenecks. Populating the scenarios in a way that efficiently exposes the most coverage is still a difficult, unsolved problem.
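To make "populating scenarios efficiently for coverage" concrete, here is a toy pure-Python sketch of a coverage-driven selection loop: generate random candidate scenarios, keep only those that hit a coverage bin not yet exercised. Every name, field, and bin here is hypothetical and illustrative, not taken from BlackParrot or any real testbench:

```python
import random

def gen_scenario(rng):
    """Randomly pick one transaction 'scenario' as an (opcode, size) pair.
    These fields and values are made up for illustration."""
    return (rng.choice(["load", "store", "amo"]), rng.choice([1, 2, 4, 8]))

def select_scenarios(n_candidates=1000, seed=0):
    """Draw n_candidates random scenarios; keep each one only if it
    covers a new (opcode, size) bin. Returns the kept scenarios."""
    rng = random.Random(seed)
    covered, kept = set(), []
    for _ in range(n_candidates):
        s = gen_scenario(rng)
        if s not in covered:  # new coverage bin -> this scenario earns its keep
            covered.add(s)
            kept.append(s)
    return kept

# 3 opcodes x 4 sizes = 12 bins, so at most 12 scenarios survive the filter
print(len(select_scenarios()))
```

The hard, unsolved part alluded to above is of course not this greedy filter itself, but defining bins that correlate with interesting behavior and steering generation toward the bins random stimulus rarely reaches.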

Good questions, and I don't claim to be the authority on these issues so I welcome being proved wrong by useful tools!

Best,
-Dan



On Mon, Mar 9, 2026 at 10:22 AM Aman Sharma <aman1...@gmail.com> wrote:
Hi Dan,

Really enjoying diving into the BlackParrot codebase — the verification architecture is already teaching me a lot.

I had a question slightly beyond the GSoC scope, and you seemed like the right person to ask.

I've been thinking about how much time verification takes in chip development — writing testbenches, debugging waveforms, achieving coverage. I'm exploring whether LLMs could meaningfully help with testbench generation from specs, and whether that's a real pain point for teams like yours or just a problem that sounds good on paper.

From your experience — is manual testbench writing actually the bottleneck, or is the harder problem somewhere else in the verification flow?

No agenda here, just trying to learn from someone who's done this at scale.

Thanks,
Aman

--
You received this message because you are subscribed to the Google Groups "black-parrot" group.
To unsubscribe from this group and stop receiving emails from it, send an email to black-parrot...@googlegroups.com.
To view this discussion visit https://groups.google.com/d/msgid/black-parrot/ec484ea8-c7b0-4e40-ab3f-ff4bd6b2a9e0n%40googlegroups.com.