Parallel "Threads" In PyMTL Testbench

Enze

Feb 19, 2021, 12:15:35 PM
to pymtl-users
I'm writing a testbench that stimulates a DUT. After the DUT is stimulated with an input transaction, an unknown number of cycles later it outputs a processed response transaction.

The DUT will be continuously stimulated with input transactions, so outputs will appear while input transactions are still being sent in.

Because sending inputs and listening for outputs are independent, I'd normally code this with multiple initial-begin blocks in SystemVerilog (or a fork-join), where one block is responsible for loading test vectors into the DUT and the other listens for DUT output vectors to save to disk.

Is there any equivalent feature in PyMTL to spawn two independent processes that simultaneously read from and write to a DUT in the same testbench?

I don't think it'd be possible to mix the send-input and listen-for-output logic in the same loop, because we'd need to execute code based on two different state machines (input state and output state), and since a given stimulus vector/transaction can last several hundred cycles, this could get quite messy.

Christopher Batten

Feb 19, 2021, 12:21:04 PM
to Enze, pymtl-users

Hi Enze,

I think you just need two update blocks? One for writing the inputs of the DUT and one for reading the outputs from the DUT? These two blocks will not be writing the same port at any time, correct?
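
For example, here is a minimal sketch of the idea. The syntax assumes a recent pymtl3 release (@update_ff, <<=, DefaultPassGroup); the DUT here is just a placeholder register and all of the names are made up:

from pymtl3 import *

class PassThruReg( Component ):            # stand-in DUT: registers its input by one cycle
  def construct( s ):
    s.in_ = InPort ( Bits32 )
    s.out = OutPort( Bits32 )
    @update_ff
    def up_reg():
      s.out <<= s.in_

class TB( Component ):
  def construct( s, msgs ):
    s.dut     = PassThruReg()
    s.in_wire = Wire( Bits32 )
    s.idx     = Wire( Bits32 )
    s.outs    = []                         # DUT outputs collected during simulation
    connect( s.in_wire, s.dut.in_ )

    @update_ff
    def up_drive():                        # block 1: writes the DUT inputs
      if s.reset:
        s.idx <<= 0
      elif s.idx < len(msgs):
        s.in_wire <<= msgs[ int(s.idx) ]
        s.idx     <<= s.idx + 1

    @update_ff
    def up_sample():                       # block 2: reads the DUT outputs
      if not s.reset:
        s.outs.append( int(s.dut.out) )

tb = TB([ 11, 22, 33 ])
tb.elaborate()
tb.apply( DefaultPassGroup() )
tb.sim_reset()
for _ in range(6):
  tb.sim_tick()
print( tb.outs )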

Best,
Chris

Enze

Feb 19, 2021, 12:40:39 PM
to pymtl-users
Hi Chris,

Thanks, I hadn't realized update blocks can be used in testbench code.

This solution would be straightforward if an input or output lasted one cycle. Then I could just do

@s.update
def send_input():
  # drive the DUT input for this cycle
  ...

@s.update
def read_output():
  # sample the DUT output for this cycle
  ...

However, sending an input lasts hundreds of cycles and involves iterating through an array. I suppose @s.update could still work if I keep variables that track where I am in the array, e.g.,

input_x = 0
input_y = 0

@s.update
def input_update():
  nonlocal input_x, input_y              # so the update block can advance the indices
  if input_y < y_lim:
    dut.input_valid.value = 1
    dut.input_data.value  = test_vector[input_x, input_y]
    input_x += 1
    if input_x >= x_lim:
      input_x  = 0
      input_y += 1
  else:
    dut.input_valid.value = 0            # all test vectors have been sent

And then I'd need to do similarly for the output.
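
For example, the output side might look something like this (same rough sketch style as above; output_x, output_y, and results are made-up names):

output_x = 0
output_y = 0
results  = []

@s.update
def output_update():
  nonlocal output_x, output_y
  if dut.output_valid.value and output_y < y_lim:
    results.append( int(dut.output_data.value) )
    output_x += 1
    if output_x >= x_lim:
      output_x  = 0
      output_y += 1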

This is a bit more verbose than nested for-loops, but it'll work. I was hoping for a construct where I don't need to explicitly track the loop indices:

initial begin
  dut.input_valid.value = 1
  for i in range(x_lim):
    for j in range(y_lim):
      dut.input_data.value = test_vector[i,j]
      sim.cycle()
  dut.input_valid.value = 0
end

And then I can do the same initial-begin-end for the output.
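
One way to get close to that feel on a single-threaded simulator is to write each "thread" as a plain Python generator that yields once per cycle, and have the top-level loop step every generator before ticking the simulator. A sketch in the same pseudo-style as above (dut, sim, test_vector, x_lim, y_lim, and results are assumed to exist):

def drive_inputs():
  dut.input_valid.value = 1
  for i in range(x_lim):
    for j in range(y_lim):
      dut.input_data.value = test_vector[i,j]
      yield                          # wait one cycle
  dut.input_valid.value = 0

def collect_outputs():
  while len(results) < x_lim * y_lim:
    if dut.output_valid.value:
      results.append( int(dut.output_data.value) )
    yield                            # wait one cycle

threads = [ drive_inputs(), collect_outputs() ]
while threads:
  for t in list(threads):
    try:
      next(t)
    except StopIteration:
      threads.remove(t)
  sim.cycle()                        # advance the clock by one cycle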

Enze

Feb 19, 2021, 3:22:51 PM
to pymtl-users
As a brief update, it seems like the test_harness idiom in the pymtl3 examples is what I need.

stdlib/test_utils has both test source and test sink objects, and I can write a custom source/sink that deconstructs test vectors from my input file into wire-level DUT inputs and reconstructs the test vectors from the wire-level DUT outputs.
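
Roughly, the harness is then just another component that structurally composes the source, the DUT, and the sink. The sketch below uses made-up class and port names (MySrc, MySink, MyDut) and assumes a recent pymtl3:

from pymtl3 import *

class TestHarness( Component ):
  def construct( s, input_msgs, expected_msgs ):
    # MySrc / MySink are stand-ins for the custom source/sink that speak
    # the DUT's wire-level protocol (valid + data in this sketch).
    s.src  = MySrc ( input_msgs )
    s.dut  = MyDut ()
    s.sink = MySink( expected_msgs )

    connect( s.src.out_valid,    s.dut.input_valid )
    connect( s.src.out_data,     s.dut.input_data  )
    connect( s.dut.output_valid, s.sink.in_valid   )
    connect( s.dut.output_data,  s.sink.in_data    )

  def done( s ):
    return s.src.done() and s.sink.done()

th = TestHarness( input_msgs, expected_msgs )
th.elaborate()
th.apply( DefaultPassGroup() )
th.sim_reset()
while not th.done():
  th.sim_tick()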

It's a different coding style: you instantiate class objects to get concurrent behavior in testbenches, rather than using fork/join (as in Chisel testers and SV) or multiple initial-begin blocks (as in SV). The state of the concurrent threads is kept explicitly as member variables of the testbench objects rather than implicitly in fork-join thread contexts. PyMTL, in this way, handles concurrency in both testbenches and RTL with the same syntax and programming model. It took a bit of time to catch onto this.

In any case, great project so far. Looking forward to seeing the features and documentation develop.

Christopher Batten

Feb 21, 2021, 10:23:20 AM
to Enze, pymtl-users

Hi Enze,

Right ... we usually use the "instantiate components to create test benches" approach. It does indeed mean the approach we use in TBs is similar to how we write the design RTL ...

I guess an important thing to keep in mind is that our current PyMTL simulation passes are all single-threaded. It definitely might be possible to write test benches in a different way, especially using greenlets, which would enable us to run some TB code and then let the simulator tick for a few cycles ... You can already kind of do this with FL modeling, but I don't think it is quite what you want. Actually, I think you could kind of do what you want with FL modeling, but you would still need explicit update blocks to manage the input and output sides concurrently.

Something to think more about!
Chris