Porting Corundum to ZCU106


John Burke

Aug 6, 2020, 1:35:42 PM8/6/20
to corundum-nic

Hi Alex,

Thanks for your work on Corundum, it really is impressive. For some work I'm involved with, I'll be porting it to the ZCU106. I've done a handful of tests to get acquainted with everything, but I still wanted to ask for some quick guidance on porting. It looks like in both mqnic and mqnic_tdma I'll want to make a ZCU106 directory, starting from your VCU118 fpga_10g design as a base since I have access to one. Using the VCU118 fpga_10g as a reference, I'll have to set up my own constraints, probably modify both the ip and fpga directories, alter fpga.v and fpga_core.v in rtl, and modify test_fpga_core.py in tb.

I think I've covered most of what will need to be done, but if I've missed anything I was hoping you might be willing to give some quick guidance.

Thanks,
John

Alex Forencich

Aug 6, 2020, 1:59:13 PM8/6/20
to corund...@googlegroups.com

ZCU106, with the ARM as the host, or connected to a server over PCIe?

I actually recently received a ZCU106 from Xilinx to facilitate development of Corundum, primarily with the Zynq as a host, but as a stepping stone I will probably also implement PCIe, even though the edge connector is only x4.  Hopefully that won't break anything, as the narrowest PCIe interface I have tested so far is gen 3 x8.  I already have a top-level constraints file put together from porting the verilog-ethernet code, though this has not yet been committed, as I am still figuring out the Zynq development flow (this is my first experience with Zynq).  I have also been working on an AXI master DMA interface module that should be able to interface with the Zynq hard logic.  It works in sim; I haven't tested it on hardware yet.

Sounds like you have most of the bases covered though. 

Alex Forencich

Alex Forencich

Aug 6, 2020, 2:45:35 PM8/6/20
to corund...@googlegroups.com

Also, no need to create the tdma version right off the bat.  Once you get the normal version working, the TDMA version is essentially just a copy, with a couple of parameters changed in fpga_core.v and some additional testbench code in test_fpga_core.py. 

Alex Forencich

Alex Forencich

Aug 7, 2020, 2:41:36 AM8/7/20
to corund...@googlegroups.com

Save you some time: https://github.com/ucsdsysnet/corundum/tree/master/fpga/mqnic/ZCU106/fpga_pcie

If you need the tdma version, let me know, I'll add that as well. 

The AXI version with the Zynq as a host will be coming along at some point, though it might be a little while.  That one is a bit more involved than simple porting. 

Alex Forencich

John Burke

Aug 7, 2020, 4:02:49 PM8/7/20
to corundum-nic
Oh wow!

Thanks, Alex! I've just pulled the recent stuff and I'll be testing it out pretty soon. As for the ARM vs. server question, I'm definitely interested in seeing both ends eventually, but the PCIe setup as it mostly stands right now is the near-term focus.

How do you envision the AXI version with the Zynq as the host looking if you don't mind me asking?

Thanks,
John

Alex Forencich

Aug 7, 2020, 4:16:56 PM8/7/20
to corund...@googlegroups.com

Well, the main HDL I need to get hammered out is a DMA interface module that speaks AXI instead of PCIe TLPs.  I have something working in sim, so I will need to test that on hardware to get all the kinks worked out.  Then, the core Corundum code will connect to the Zynq PS hard logic with an AXI lite slave interface for configuration and an AXI master interface for DMA.

I have also made some changes to the driver to try to clean out as much PCIe-specific stuff as possible.  The idea is to have the kernel module register itself both as a PCIe device driver and as a platform device driver, with separate probe and remove functions and most of the rest of the code shared between both.  I will also have to adjust the Corundum driver simulation model so it can be used with AXI endpoint models as well as the PCIe root complex model.

The main thing is to figure out how the tools work for Zynq, as well as how to set things up so that the IPI flow does what it's supposed to do.  This will probably require reorganizing things a bit, as well as writing some additional wrapper code.  I'm not a fan of the IPI flow and graphical design in general, but it's the only way to connect to the ARM side of the Zynq, so I'll have to figure out how to make it work in an automated manner so I can drive the whole thing from makefiles like I do for everything else. 

Alex Forencich

John Burke

Aug 20, 2020, 2:50:31 PM8/20/20
to corundum-nic

Sorry for my delayed reply,

It looks like I can get through most things, but I'm a bit stuck, mainly because I need to be sitting next to the device to troubleshoot, which hasn't been the norm recently.

A small note for the ZCU106, I had to create a `common` symlink in its rtl directory to get things to build. Hopefully I'll have more info tomorrow.

Thanks
John

Alex Forencich

Aug 20, 2020, 3:31:04 PM8/20/20
to corund...@googlegroups.com

Whoops, missed that symlink in the original commit, should be fixed now. 

Alex Forencich