GSoC 2026: UVM Testbench Development & Coverage Exploration via LLM Agents


VAKKALAGADDA TANUSH PAVAN

Mar 12, 2026, 1:53:19 PM
to black-...@googlegroups.com

Hi Dan and the BlackParrot Team,

I am a second-year dual-degree student (Electrical Engineering and Data Science) at IIT Madras. I’ve been following the BlackParrot project and specifically the verification challenges mentioned in the GSoC ideas list.

I am currently working on a RISC-V verification automation platform and recently submitted a Q1 journal paper describing a Binarized Neural Network (BNN) hardware accelerator I designed. My interest lies at the intersection of robust RTL verification and using ML to close the "coverage gap."

I've spent time reviewing the community's recent discussions regarding LLMs in verification. I completely align with the philosophy that for GSoC, the core UVM testbench architecture should be handwritten to ensure deep architectural understanding.

For my proposal, I am interested in:

  1. Manual UVM Infrastructure: Building out standard testbenches for BlackParrot units (FE/BE/ME) to ensure high-fidelity, maintainable code.

  2. LLM-Assisted Scenario Generation: Developing tools that act as "agents" to generate complex stimulus scenarios. The goal would be to automate the population of scenarios that hit difficult corner cases and close functional coverage—addressing a major current bottleneck.
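To make item 2 concrete, here is a rough sketch of the coverage-closure loop I have in mind. Everything here is hypothetical: propose_scenario() stands in for an LLM agent call, run_simulation() for a UVM regression that reports which functional-coverage bins were hit, and the bin names are placeholders, not BlackParrot's actual coverage model.

```python
# Hypothetical sketch of an agentic coverage-closure loop (all names invented).
import random

ALL_BINS = {f"bin_{i}" for i in range(8)}  # placeholder coverage model

def propose_scenario(uncovered, rng):
    # A real agent would condition an LLM on the uncovered bins and emit
    # constraint knobs; here we just target one uncovered bin at random.
    return {"target": rng.choice(sorted(uncovered)), "seed": rng.randint(0, 99)}

def run_simulation(scenario, rng):
    # Stand-in for launching a UVM test; returns the set of bins hit.
    hit = {scenario["target"]}                       # the targeted bin
    hit |= set(rng.sample(sorted(ALL_BINS), k=2))    # incidental coverage
    return hit

def close_coverage(max_iters=50, seed=0):
    rng = random.Random(seed)
    covered = set()
    for _ in range(max_iters):
        uncovered = ALL_BINS - covered
        if not uncovered:
            break
        covered |= run_simulation(propose_scenario(uncovered, rng), rng)
    return covered
```

The point of the sketch is the feedback structure: each iteration feeds the remaining coverage holes back to the generator, so stimulus is steered toward uncovered bins rather than sampled blindly.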

I am currently diving into the bp_common interfaces and the existing test suite. I would love to hear whether this "manual architecture + agentic stimulus" approach aligns with the team's vision for this year's GSoC.

Best regards,

Pavan
GitHub: https://github.com/HUNT-001
