Hi Dan and the BlackParrot Team,
I am a second-year dual-degree student (Electrical Engineering and Data Science) at IIT Madras. I’ve been following the BlackParrot project and specifically the verification challenges mentioned in the GSoC ideas list.
I am currently building a RISC-V verification automation platform, and I recently submitted a Q1 journal paper on a Binarized Neural Network (BNN) hardware accelerator I designed. My interests lie at the intersection of robust RTL verification and applying ML to close the "coverage gap."
I've spent time reviewing the community's recent discussions on LLMs in verification, and I fully agree with the philosophy that, for GSoC, the core UVM testbench architecture should be handwritten to ensure deep architectural understanding.
For my proposal, I am interested in two complementary directions:

1. Manual UVM infrastructure: building out standard testbenches for the BlackParrot units (FE/BE/ME), with an emphasis on high-fidelity, maintainable code.

2. LLM-assisted scenario generation: developing tools that act as "agents" to generate complex stimulus scenarios. The goal is to automate the generation of scenarios that hit difficult corner cases and help close functional coverage, which is a major bottleneck today.
I am currently diving into the bp_common interfaces and the existing test suite. I would love to hear whether this "manual architecture + agentic stimulus" approach aligns with the team's vision for this year's GSoC.
Best regards,
Pavan
GitHub: https://github.com/HUNT-001