Ideas for better UVM?


Puneet Goel

unread,
Jan 27, 2017, 4:12:34 AM1/27/17
to Freecellera
Greetings

I hear a lot of chatter on the web about UVM not being good enough. Unfortunately, the industry seems stuck with UVM for a while, given the ongoing effort to create an IEEE standard around it. While I am sure the standardization committee is working hard on making UVM better, the standardization effort itself is essentially opaque to most of us. And I felt disappointed after learning a little about the new UVM "features" being envisaged from the new book on UVM by Srivatsa Vasudevan: https://www.amazon.com/Practical-UVM-Srivatsa-Vasudevan/dp/0997789603 .

For example, a paper titled "The Universal Translator" was presented at DVCon 2014 by David Cornfield of AppliedMicro. That presentation created a lot of buzz at the conference, and a lot of people (including the BigEDA guys) agreed that it is an idea worth implementing in UVM. But unfortunately, I do not see any movement in that direction.

What do you guys say? I would be interested in knowing about any other ideas you might have come across. I am aware of libraries like svlib and svunit, but those are peripheral to UVM. I am more interested in ideas that could be implemented within UVM to make it better.

Regards
- Puneet


Puneet Goel

unread,
Jan 27, 2017, 4:14:03 AM1/27/17
to Freecellera
Here is the link to "The Universal Translator" paper by David. Forgot to mention it in my original post.

https://dvcon-europe.org/sites/dvcon-europe.org/files/archive/2014/proceedings/T2_3_paper.pdf

Bryan Murdock

unread,
Jan 27, 2017, 10:05:34 AM1/27/17
to Puneet Goel, Freecellera
Most of the UVM code simply implements design patterns straight from
the Gang of Four book. If we used a dynamic language that had those
design patterns baked into the language (e.g., Python, Perl, Ruby,
JavaScript), we wouldn't need so much framework and boilerplate code
to solve the problems that those design patterns solve. Not needing
to spend time implementing GoF design patterns would free up a lot
of time and energy for solving more interesting problems.

I have a more detailed write-up on this on my employer's internal
wiki. Let me see if I can make it public.

I guess what I'm saying is: to make a better UVM, use a language
better matched to the problem. Excessive use of design patterns is a
sign that the language isn't matched to the problem.
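To make that concrete, here is a toy sketch (hypothetical class names, not real UVM code) of how the factory pattern -- which UVM implements with registration macros and a global factory -- collapses when classes are first-class values:

```python
# Toy sketch (hypothetical names): in Python, classes are first-class
# objects, so factory registration and type overrides reduce to a
# dictionary lookup.

class Driver:
    def run(self):
        return "base driver"

class ErrorInjectingDriver(Driver):
    def run(self):
        return "error-injecting driver"

overrides = {}  # the whole "factory": original class -> override class

def create(cls):
    """Instantiate cls, honoring any registered override."""
    return overrides.get(cls, cls)()

overrides[Driver] = ErrorInjectingDriver   # analogous to a UVM type override
print(create(Driver).run())                # prints: error-injecting driver
```

In SystemVerilog the equivalent needs `uvm_component_utils` registration, `create()` calls, and a factory override call; here the "factory" is one dict and one function.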

Bryan
> --
> You received this message because you are subscribed to the Google Groups
> "Freecellera" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to freecellera...@googlegroups.com.
> To post to this group, send email to freec...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/freecellera/f553906f-eb0d-445d-bb60-4f9cdf6bb20f%40googlegroups.com.
>
> For more options, visit https://groups.google.com/d/optout.

Erik Jessen

unread,
Jan 27, 2017, 11:37:50 AM1/27/17
to Bryan Murdock, Puneet Goel, Freecellera
In the end, the reason people use languages/standards like UVM is productivity.
The highest form of that is reuse.

So whatever you do, it needs to enable IP creation and reuse more than all the other existing standards.
And of course, it has to enable reuse of existing IP created with those standards.

So: here are some ideas:
1) The .NET library is huge, and it has an open-source equivalent called Mono.  There is, of course, a huge number of programmers who know .NET/Mono and who are available, competent, and far cheaper than HW designers.  So add a Mono interface onto SV and VHDL; then more verification can be done by those SW engineers.
2) Create a big package of standard tools for the checkout/edit/regress/merge/checkin cycle, using open-source tools: sveditor, Jenkins, VUnit, SVUnit, plus any open-source tools/standards for tracking defects, functional-coverage results, etc.  This doesn't sound sexy, but if your goal is to help people, a set of tools that are set up to work together, with a nice configuration front-end, would be a big help.  Have hooks for both Git and SVN, and built-in support for branches so it's easy to use them.
3) Create a set of IP-XACT vendor extensions that support UVM, so that UVM info can be captured.  Example: IP-XACT is all set up to capture RTL info, and has support for capturing script flows, etc.  What's missing is the ability to say "BusDef AXI, version 1.5, is implemented for use with UVM via package axi_1p5_pkg and SV interface axi_1p5_if.sv", and then to describe how to wire up the UVM SV interface to the bus.  For predictors, the equivalent would be to describe each of the ports (analysis exports/analysis ports) on the predictor and the transaction type each uses.  This would be part of the IP-XACT info stored for the RTL IP.  Then one could generate a UVM testbench for the RTL using whatever UVM testbench approach one wanted.  Or, say one had two incompatible predictors (from two different vendors): the scripts could generate a subscriber to translate between the two transaction types.
4) Take one of the existing graph-capture tools and enhance it so that it captures the info required for designing a UVM testbench: agents/predictors/scoreboards.  I'd suggest using IP-XACT to store the info about those UVM objects (see above) so users could create valid testbenches.  Alternatively, use an existing schematic-capture tool: old-school "buses" in schematic-capture tools are equivalent to UVM analysis ports/exports, so there's not a lot of work there.

5) If Icarus Verilog (just to pick on a tool) were enhanced to have DPI, then one could do all kinds of object-oriented verification using SystemC/C#/Java/Ruby or whatever you want.  This greatly expands the pool of available, talented implementors, and of course it would all be *open source*.  The only thing needed would be CPU cycles, and the prices for those are always going down.  Yes, Icarus is dramatically slower than the vendor tools -- but a hundred Icarus licenses running a regression suite in parallel will finish faster than a few vendor simulator seats running the same suite, and be far cheaper.

6) Take a look at the new Eclipse open-source work for capturing embedded-software requirements.  Everything I've seen so far says that one could take the generated code and co-simulate it with the RTL.  Now, if that RTL were described in SystemC, no simulator tool would be required.  And those same tools, if one added a symbol that indicated a register, could be used to graphically capture the RTL implementation itself.
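For item 3, a vendor-extension fragment might look something like the following sketch. The `uvm:` namespace and every element name here are invented purely to illustrate the idea -- nothing in this snippet is standard IP-XACT:

```xml
<!-- Hypothetical sketch: the uvm: namespace and element names are
     invented for illustration; only ipxact:vendorExtensions is real. -->
<ipxact:vendorExtensions>
  <uvm:busSupport>
    <uvm:busDef name="AXI" version="1.5"/>
    <uvm:svPackage>axi_1p5_pkg</uvm:svPackage>
    <uvm:svInterface file="axi_1p5_if.sv">axi_1p5_if</uvm:svInterface>
    <uvm:predictorPort kind="analysis_export" name="axi_in"
                       transaction="axi_1p5_txn"/>
  </uvm:busSupport>
</ipxact:vendorExtensions>
```

A testbench generator could walk these extensions to instantiate agents and wire up the SV interface to the bus, as described above.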

I'd not "take on" UVM - I'd leverage off of it.  Fix the problems that UVM doesn't address - and never will.

Just some thoughts...

Regards,
Erik


On Fri, Jan 27, 2017 at 7:05 AM, Bryan Murdock <bmur...@gmail.com> wrote:
> On Fri, Jan 27, 2017 at 2:14 AM, Puneet Goel <pun...@coverify.com> wrote:
> > Here is the link to "The Universal Translator" paper by David. Forgot to
> > mention it in my original post.
> >
> > https://dvcon-europe.org/sites/dvcon-europe.org/files/archive/2014/proceedings/T2_3_paper.pdf


Puneet Goel

unread,
Jan 27, 2017, 12:18:52 PM1/27/17
to Bryan Murdock, Freecellera

> Most of the UVM code simply implements design patterns straight from
> the Gang of Four book.  If we used a dynamic language that had those
> design patterns baked into the language (e.g., Python, Perl, Ruby,
> Javascript, etc.) we wouldn't need so much framework and boiler-plate
> code in order to solve the problems that those design patterns solve.

I think I can partly relate to you here. Dynamic typing could have resulted in a lot more code reuse.

IMHO, SV complicates the situation further, since class interfaces have not been put to use in UVM yet. SV also lacks function/operator overloading, and as a result generic and container libraries are all but impossible to implement. I think UVM tends to overuse the template and strategy patterns to cover up for the lack of function/operator overloading.
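To illustrate what overloading buys (a toy sketch with hypothetical type names, not UVM code): one generic sort and one generic search work for any payload type that defines its own comparison operators, with no per-type compare methods:

```python
# Toy sketch (hypothetical names): operator overloading lets generic
# library code (sorting, searching) work for any user-defined type,
# with no per-type compare boilerplate.

class Txn:
    def __init__(self, addr, data):
        self.addr, self.data = addr, data
    def __eq__(self, other):                 # overloaded ==
        return (self.addr, self.data) == (other.addr, other.data)
    def __lt__(self, other):                 # overloaded <
        return (self.addr, self.data) < (other.addr, other.data)

queue = [Txn(8, 0xFF), Txn(4, 0xAA), Txn(8, 0xFF)]
queue.sort()                                 # generic sort, via __lt__
assert queue[0].addr == 4
assert Txn(8, 0xFF) in queue                 # generic search, via __eq__
```

In SV, every transaction class ends up carrying its own `compare()`/`do_compare()` methods instead, which is part of the boilerplate being discussed.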
 
> Not needing to spend time implementing GoF design patterns would free
> up a lot of time and energy for solving more interesting problems.
>
> I have a more detailed write-up on this on my employer's internal
> wiki.  Let me see if I can make it public.

We look forward to your contribution.
 

> I guess what I'm saying is, to make a better UVM, use a language more
> matched to the problem.  Excessive use of design patterns shows that
> the language isn't matched to the problem.

While use of a dynamic language could simplify a lot of verification infrastructure, it would also make the testbenches a tad slower. I see a lot of people wanting to reuse the same verification infrastructure for emulation as well.

Erik Jessen

unread,
Jan 27, 2017, 12:29:19 PM1/27/17
to Puneet Goel, Bryan Murdock, Freecellera
My personal observation is that the bigger the project, the more tightly typed the language needs to be; the most difficult bugs come when some automatic type cast (or value rounding/extension) isn't *exactly* what was planned for. It's far better to use a tightly typed language with explicit casting than a loosely typed one and then have failures at system integration.  And when the bug is actually inside vendor code, and shows up only at run time, there's no way you can ship your proprietary design to them.
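A toy sketch (hypothetical code) of the failure mode being described: in a dynamically typed language, a mismatched payload type is only caught when the offending path finally executes, possibly deep into a long regression:

```python
# Toy sketch (hypothetical): a type mismatch in a dynamically typed
# testbench surfaces only at run time, when this path finally executes.

def pack_bytes(values):
    """Pack byte values into one integer, little-endian."""
    word = 0
    for i, v in enumerate(values):
        word |= (v & 0xFF) << (8 * i)
    return word

print(hex(pack_bytes([0x12, 0x34])))   # fine: 0x3412
# pack_bytes([0x12, "0x34"])           # TypeError -- but only at run time,
#                                      # possibly hours into a regression
```

A statically typed language rejects the second call at compile time, which is exactly the trade-off against the flexibility argued for above.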


Puneet Goel

unread,
Jan 27, 2017, 12:55:03 PM1/27/17
to Erik Jessen, Bryan Murdock, Freecellera
On Fri, Jan 27, 2017 at 10:07 PM, Erik Jessen <nbje...@gmail.com> wrote:
> 5) If Icarus Verilog (just to pick on a tool) were enhanced to have DPI, then one could do all kinds of object-oriented verification using SystemC/C#/Java/Ruby or whatever you want.  This greatly expands the pool of available, talented implementors.  And of course it would all be *open source*.  The only thing needed would be CPU-cycles, and the prices for those are always going down.  Yes, Icarus is dramatically slower than vendor tools - but a hundred Icarus licenses in parallel, running a regression suite, is going to finish faster than a few vendor simulator seats running the same suite.  And be far cheaper.


I see three free options when it comes to Verilog -- and there may be more that I am not aware of -- Icarus, Vivado, and CVC. Of these, CVC and Vivado support DPI.

Icarus, on the other hand, supports only VPI/PLI. But my take is that since we need to transfer a BFM transaction to the simulator only once per clock cycle, VPI works well too. We use the Icarus VPI interface to communicate transactions to Verilog from Vlang testbenches. It works well, and we do not see any efficiency issues.

 
> 6) Take a look at the new Eclipse open-source work for capturing embedded-software requirements.  So far, everything I've seen says that one could take the generated code and co-simulate with the RTL.  Now, if that RTL were described in SystemC, no simulator-tool would be required.  And those same tools, if one added a symbol that indicated a register, could be used to actually graphically capture RTL implementation.

How well do the synthesis tools support the RTL subset of SystemC? Does Vivado support SystemC RTL synthesis?

An LLVM-based SystemC RTL to Verilog RTL translator could be an interesting open-source project idea. It would not be very tough to implement, since we need to convert from RTL to RTL -- no behavioural synthesis. Could it be proposed as a Google Summer of Code (GSoC) project? If you guys think it could be of practical use, we could take it up with the FOSSi Foundation so that they propose it as a GSoC project.

Puneet Goel

unread,
Jan 27, 2017, 1:02:57 PM1/27/17
to Erik Jessen, Bryan Murdock, Freecellera

On Fri, Jan 27, 2017 at 11:25 PM, Puneet Goel <pun...@coverify.com> wrote:
> An LLVM based SystemC RTL to Verilog RTL translator can be an interesting opensource project idea. It would not be very tough to implement since we need to convert from RTL to RTL -- no behavioural synthesis. Could it be proposed as a Google GSoC project? If you guys think this could be practical use, we could take it up with FOSSI Foundation so that they propose it as GSoC project.

In fact, Verilog RTL -> LLVM -> SystemC RTL could be even more practical. It is much simpler to write RTL in Verilog than in SystemC.

Puneet Goel

unread,
Jan 27, 2017, 1:19:59 PM1/27/17
to Erik Jessen, Bryan Murdock, Freecellera
On Fri, Jan 27, 2017 at 10:07 PM, Erik Jessen <nbje...@gmail.com> wrote:
> Take a look at the new Eclipse open-source work for capturing embedded-software requirements.  So far, everything I've seen says that one could take the generated code and co-simulate with the RTL.  Now, if that RTL were described in SystemC, no simulator-tool would be required.  And those same tools, if one added a symbol that indicated a register, could be used to actually graphically capture RTL implementation.

Personally, I am rather sceptical when it comes to RTL code generation. Aren't we trying to implement a kind of behavioural synthesis tool here?

I believe it is best to stick to hand-coded Verilog RTL when it comes to design. The only other solution I have come across that works is Chisel: https://chisel.eecs.berkeley.edu/ .

I am not saying an Eclipse based solution would not work, but in my humble opinion it would be quite an effort to get that to work.

I am proposing a Verilog RTL -> SystemC RTL converter only to compensate for the lack of really fast open-source Verilog simulators. Even for SystemC, we would need to work on an efficient implementation of sc_bit and sc_logic. But since SystemC is now re-licensed under the Apache license, there would not be any logistical impediment to doing that.

Puneet Goel

unread,
Jan 27, 2017, 1:23:28 PM1/27/17
to Erik Jessen, Bryan Murdock, Freecellera

On Fri, Jan 27, 2017 at 11:49 PM, Puneet Goel <pun...@coverify.com> wrote:
> we will need to work on an efficient implementation of sc_bit and sc_logic

I meant an efficient implementation of the vectors sc_bv and sc_lv.

Erik Jessen

unread,
Jan 27, 2017, 1:29:15 PM1/27/17
to Puneet Goel, Bryan Murdock, Freecellera
Verilog RTL is *wonderful* for writing RTL.  Though I would say VHDL is better, because of the advantages of strongly typed languages when doing reuse.
SC is like C++: trying to solve all the world's problems in a single language, with all the attendant problems of not being great at any of them.  So I'd stick to SC for the testbench (as needed).  But I really think a case could be made for Verilog+C#+Mono.

The idea being: the clock-domain world is all Verilog.
The event-domain world is C#, using Mono for libraries (and there's a HUGE worldwide supply of tools, IP, etc. for doing C#).
DPI is used between the clock-domain and event-domain worlds.

My understanding is that Java is quite fast now.

From a performance standpoint: once one says "directed random", one has said, "predictor".  And a predictor has all the functionality of the original RTL.  So your event-domain world is going to be 2-4x larger (at least) than your RTL world, in whatever metric you choose to use.
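A minimal sketch (hypothetical names, not real testbench code) of that predictor/scoreboard relationship -- the predictor re-implements the DUT's function in the event domain, which is why it can rival the RTL in size:

```python
# Minimal sketch (hypothetical): the predictor is a golden model that
# mirrors the RTL's function; the scoreboard compares its prediction
# against what the DUT actually produced.

def predictor(txn):
    """Golden model of an 8-bit adder DUT (wraps like the RTL)."""
    return (txn["a"] + txn["b"]) & 0xFF

def scoreboard(txn, dut_result):
    expected = predictor(txn)
    assert dut_result == expected, f"mismatch: {dut_result} != {expected}"

# DUT output stubbed here; in a real bench it comes from simulation.
scoreboard({"a": 200, "b": 100}, (200 + 100) & 0xFF)   # passes silently
```

For a one-line adder the golden model is trivial; for a real block, the predictor carries the full functional spec, which is the 2-4x cost being described.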

So you want to focus on making the clock-domain world as productive as possible, and on constructing it so that the pool of already-trained engineers is as big as possible.

Erik

Erik Jessen

unread,
Jan 27, 2017, 1:36:17 PM1/27/17
to Puneet Goel, Bryan Murdock, Freecellera
The Polarsys group at Eclipse is all about capturing the required behavior of software.
This comes into use in RTL verification in the following areas:
a) Documentation -- you can't very well test what's not documented.
b) Directed random requires a predictor with the same functionality as the RTL (but all event-driven).  If somebody has already captured "executable requirements", why would you recode them from scratch?
c) Directed random requires functional coverage, and that requires documentation as well: what gets covered, when, and under what circumstances.  Functional coverage always traces to requirements, and somebody electronically captured those requirements.  Why not append/extend the requirements to include the functional coverage required to verify that they've been tested?

I actually don't think that people will normally generate RTL from graphics - but I'd hate to preclude it.

I'd suggest just sticking with Verilog and one of the open-source simulators, rather than translating into SC.  You add a lot of library overhead when you go to SC, so I'd wonder if it would be any faster.  It certainly adds more steps, and cycle time is very important.

Erik

Olof Kindgren

unread,
Jan 31, 2017, 2:52:17 PM1/31/17
to Freecellera, nbje...@gmail.com, bmur...@gmail.com

Have you looked at Verilator? It can generate both SystemC and plain C++ from synthesisable Verilog (and, to an increasing extent, SystemVerilog). Verilator is of course a supported flow in FuseSoC as well :) We've used it quite a lot in the OpenRISC community, and the Chisel guys have replaced their internal C++ backend with Verilator.

It doesn't go through LLVM, however, if that was the criterion you were looking for. Actually, I'm wondering if you mean the LLVM IR when you say LLVM. A common IR has been high on our wishlist for some time, and it looks like we might actually have a contender now in the FIRRTL language. It still has zero tool adoption so far, but things might change.

Puneet Goel

unread,
Jan 31, 2017, 9:08:23 PM1/31/17
to Olof Kindgren, Freecellera, Erik Jessen, Bryan Murdock
On Wed, Feb 1, 2017 at 1:22 AM, Olof Kindgren <olof.k...@gmail.com> wrote:
> Have you looked at verilator? It can generate both SystemC or plain C++ from synthesisable Verilog (and to an increasing amount systemVerilog). Verilator is of course a supported flow in FuseSoC as well :) We've used it quite a lot in the OpenRISC community and the Chisel guys have replaced their internal C++ backend and went with verilator instead.


Olof

I have come across Verilator multiple times, but never actually used it. Thanks for the reminder. I will download OpenRISC today and see it in action.
 
> It doesn't go through LLVM however if that was the criteria you were looking for. Actually, I'm wondering if you mean the LLVM IR when you say LLVM. A common IR has been high on our wishlist for some time, and it looks like we actually might have a contender with the FIRRTL language now. It still has zero tool adoption so far, but things might change.

Yes, I meant the LLVM IR. I am exploring LLVM these days and will take a look at FIRRTL too.

Thanks for the pointers. Seems you guys are doing some great work.

Regards
- Puneet

Bryan Murdock

unread,
Feb 7, 2017, 2:45:39 PM2/7/17
to Puneet Goel, Freecellera
On Fri, Jan 27, 2017 at 10:18 AM, Puneet Goel <pun...@coverify.com> wrote:
>
>> Most of the UVM code simply implements design patterns straight from
>> the Gang of Four book. If we used a dynamic language that had those
>> design patterns baked into the language (e.g., Python, Perl, Ruby,
>> Javascript, etc.) we wouldn't need so much framework and boiler-plate
>> code in order to solve the problems that those design patterns solve.
>
>
> I think I can partly relate to you here. Dynamic typing could have resulted
> in a lot more code reuse.
>
> IMHO, SV complicates the situation further since class interfaces have not
> been put to use yet. SV also lacks function/operator overloading and as a
> result generic and container libraries are impossible to implement. I think
> UVM tends to overuse template and strategy patterns to cover up for lack of
> function/operator overloading.
>
>>
>> Not needing to spend time implementing GoF design patterns would free
>> up a lot of time and energy for solving more interesting problems.
>>
>> I have a more detailed write-up on this on my employer's internal
>> wiki. Let me see if I can make it public.
>
>
> We would be looking forward to your contribution.

Here it is, finally:

http://bryan-murdock.blogspot.com/2017/02/systemverilog-and-python.html

Bryan

Kevin Cameron

unread,
Apr 9, 2018, 1:08:31 PM4/9/18
to Freecellera
These guys are working on an LLVM-based SystemVerilog simulator:

https://www.metrics.ca/
http://deepchip.com/items/0580-02.html

That should be easier to link third-party code with, and the backend/runtime piece will probably be more accessible.

Kev.

Erik Jessen

unread,
Apr 9, 2018, 2:01:22 PM4/9/18
to Kevin Cameron, Freecellera
I sense that the more advanced people are getting off UVM as much as possible: using SystemC, property checkers plus emulation, co-simulation (with C), co-emulation, etc.
UVM has to solve a problem -- and if it does so in a more cumbersome way than the alternatives, people will move to other methods.

Erik


Bryan Murdock

unread,
Apr 10, 2018, 5:31:53 PM4/10/18
to Kevin Cameron, Freecellera
On Mon, Apr 9, 2018 at 11:08 AM, Kevin Cameron <camer...@gmail.com> wrote:
> These guys are working on an LLVM based SystemVerilog -
>
> https://www.metrics.ca/
> http://deepchip.com/items/0580-02.html
>
> That should be easier to link 3rd party code with, and the backend/runtime
> piece is probably going to be more accessible.

I didn't see any mention of LLVM on either of the linked pages. Do
you have some inside information?

Do they plan to open-source their LLVM-based simulator?

Bryan

Puneet Goel

unread,
Apr 19, 2018, 12:08:18 PM4/19/18
to Bryan Murdock, Kevin Cameron, Freecellera
Hello Freecellerists

Thank you, Kevin, for mentioning Metrics. AFAIR, Cadence was the first to come out with a cloud simulation platform. It seems they still have a cloud-based solution on offer, though I believe not many people are ready to expose their IP to the cloud yet. I also found this article on the Synopsys site, and it seems that through it Synopsys is trying to dissuade people from using the cloud.

On my end, I have been devoting time to making a release of the open-source implementation of UVM that I have been working on for the last few years. Since I also work on customer projects to earn my wages, progress has been slower than I would have liked. I am hoping to make an alpha release a couple of weeks from now.

Here are some features you can look forward to:

1. A standalone open-source UVM implementation that does not require a SystemVerilog license. We do not support SystemVerilog syntax, but our syntax is C-flavoured and therefore similar to SV.
2. Seamless integration with Icarus Verilog via PLI.
3. Integration with the Xilinx Vivado simulator via DPI (still working on it -- hoping to include it in the alpha release).
4. FLI integration with ModelSim for verifying VHDL designs. VHPI integration is in the works, to enable integration with other VHDL simulators, including GHDL.
5. When integrated with any Verilog/VHDL simulator, our UVM implementation runs on a parallel thread, enabling faster testbench simulations.
6. This implementation of UVM is multicore-enabled. I do not know of any other implementation that enables multicore parallelism.
7. We use LLVM to compile, and we have been able to create binaries for ARM as well.
8. With ARM binaries, we have successfully run UVM testbenches on Zynq and Cyclone V boards, mapping the DUT to the FPGA with the UVM testbench executing on the on-chip HPS (ARM Cortex-A9 processor).
9. We have been able to seamlessly integrate this UVM with QEMU, creating a sophisticated hardware/software co-verification setup.

Our implementation is a port of UVM version 1.2. We have considerable support for constrained randomization. Though we have created the fundamental constructs for functional coverage, we do not think we will be able to stabilize it in time for this release.

Looking forward to support from this community.

Regards
- Puneet
