This is way out ahead of everybody's skis, but it won't hurt to have time to process it.
Questions
This is an investigation of core processor behavior rather than a co-processor thing. There are four main questions for choosing a board to experiment with:
- Resources: What FPGA resources does a project like this need?
- Peripherals: Since we eventually want to run this as something like a single-board computer (SBC), what peripheral requirements do we have that might not be normal or typical in the SoC/FPGA world?
- Cost: What does it cost to get the features and resources we need on a suitable board, and how do we minimize that?
- Device Style: We could use either an SoC or an FPGA. What are the pros and cons? "SoC" in this context means an FPGA with a hard core (in this case, an ARM Cortex) and some peripherals.
Resources
My sense is that we're going to end up with two board selections. One will be larger for the initial hardware development part of this, and will be somewhat expensive. We won't know what our FPGA resource requirements are until we start working on things, and under those circumstances it seems better to have and not need. The two boards I'm currently considering run $1043 (FPGA only) and $1669 (SoC with hard cores). I need to validate that by throwing some things into the simulator; they may be total overkill. It won't take that long to have a clearer picture, and we can select a cheaper board that better reflects what we actually need.
The other will be a board to run Coyotos on. By that point we'll have a handle on what our pseudo-SBC needs, and we should be able to find cheaper alternatives. I'm guardedly hopeful that we might get away with a board like
this one using the Artix-7 XC7A200T part. At the price, it's pretty tough to beat. The number of collaborators we can interest is going to have a lot to do with that price tag.
It's also possible to rent FPGA boards on AWS. $1.50-ish an hour sounds cheap until you forget to turn the thing off one night. Financially, I think it's a risky approach.
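To put rough numbers on that risk (assuming the ~$1.50/hour figure; actual rates vary by instance size and region), a quick back-of-the-envelope comparison:

```python
# Back-of-the-envelope AWS FPGA rental cost, assuming ~$1.50/hour.
# (The real hourly rate varies by region and instance size.)
HOURLY_RATE = 1.50

def rental_cost(hours_per_day: float, days: int) -> float:
    """Total rental cost in dollars for the given usage pattern."""
    return HOURLY_RATE * hours_per_day * days

# Disciplined use: four hours a day for a month.
disciplined = rental_cost(4, 30)    # $180/month
# Forget to shut it down: running around the clock for a month.
left_running = rental_cost(24, 30)  # $1080/month -- roughly the price
                                    # of buying the FPGA-only board.
print(f"disciplined: ${disciplined:.2f}, left running: ${left_running:.2f}")
```

In other words, one month of forgetting to power the instance off costs about as much as owning the cheaper board outright.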
Peripherals
All of these are PCIe cards, but it is possible to run them standalone on a desktop. If you end up running this as a standalone card, my sense is that you'll probably want connectors for a keyboard, an Ethernet port, and an SSD as peripherals. At first glance the FPGA-only boards look cheaper, but the parts you need to add to support an SSD typically erase the price difference, and the hardware support for those interfaces takes up space in the FPGA fabric. Those issues may make the SoC solutions cheaper, because most of this stuff is already on the SoC boards.
Since you'll need to run the design software on a PC, an alternative is to drop the card into the PC as well and configure PCIe support on the card so that you can talk to it. I don't know yet what the tools provide, but worst case we can rig up an application on the PC side that snoops a frame buffer, implements some kind of keyboard-like transfer, and provides a block of host-side disk or SSD to the card to use as a drive. There are existing sample projects that provide VHDL for a lot of this. I'm expecting this to be the approach that I take.
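As a sketch of the disk piece of that host-side application (hypothetical: the real transport would be PCIe DMA or a driver-exposed device file, which I'm stubbing out with an ordinary file here, and `HostBlockStore` is a made-up name), the block-store side might look something like:

```python
# Hypothetical sketch of the host-side block store such a PC-side
# application could expose to the card as a "drive". The PCIe transport
# is stubbed out: the card's requests would arrive as DMA descriptors,
# but here reads and writes are just method calls.
BLOCK_SIZE = 512

class HostBlockStore:
    """Serves fixed-size blocks out of an ordinary host file."""

    def __init__(self, path: str, num_blocks: int):
        self.path = path
        self.num_blocks = num_blocks
        # Pre-size the backing file so any block can be read back,
        # even before it has ever been written.
        with open(path, "wb") as f:
            f.truncate(num_blocks * BLOCK_SIZE)

    def read_block(self, n: int) -> bytes:
        if not 0 <= n < self.num_blocks:
            raise ValueError("block number out of range")
        with open(self.path, "rb") as f:
            f.seek(n * BLOCK_SIZE)
            return f.read(BLOCK_SIZE)

    def write_block(self, n: int, data: bytes) -> None:
        if len(data) != BLOCK_SIZE:
            raise ValueError("data must be exactly one block")
        if not 0 <= n < self.num_blocks:
            raise ValueError("block number out of range")
        with open(self.path, "r+b") as f:
            f.seek(n * BLOCK_SIZE)
            f.write(data)
```

The keyboard-like transfer and the framebuffer snoop would sit alongside this in the same application; the point is just that nothing on the host side is exotic.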
A third option is to run some kind of remote desktop on the board. At that point you can either run the board standalone or have it in a PCIe slot. If you're running something like this, you're not going to want to wait for the FPGA design to be stable, so I suspect in this case you'll want an SoC so that you have the hard core to run things on while the FPGA work is still falling over. In that case you'll probably want an SSD on the board, though the hard-core boards all provide an M.2 connector.
Treating the board as a card in your PC rather than running it standalone avoids some Frankenstein cabling, which seems convenient.
Cost
I think I've addressed this as far as it can be answered at this time.
Device Style
There's a strongly held view on Reddit that if you're trying to build a CPU rather than a co-processor, you should use an FPGA and not try to deal with all of the distracting SoC stuff. It means doing all of the peripheral support in the FPGA fabric, which has resource and budget implications, but I can definitely appreciate the "don't try to deal with two machines at once" mindset. In a pinch, you can incorporate a small, vendor-provided soft core in the FPGA alongside the experimental core so that you have a place to run Linux.
The counterargument (in my mind, anyway) is that modifying a CPU is a complex undertaking, and it can be helpful to have a place to stand on the part for debugging, device bridging, power management, and stuff like that. Then there's the advantage that on an SoC board all of the peripherals are attached to the hard core. Having spent some quality time with the manuals and a bunch of YouTube videos, I think things are more flexible than the naysayers believe. Yes, a few devices will need to be proxied by the hard core, but that really isn't all that exciting. I think there is some element of "kernels are a little alien to FPGA people, so better not to have to mess with them," though that's certainly not universal. I'm more or less coming from the other side of that mirror; I don't see a baby kernel for the hard core as particularly hard, and I think the isolation between the dev-support code and the processor on the FPGA is really helpful if you are going to be running two cores.
Another question, I suppose, is what other things people may want to explore. If you're interested in playing with an ARM multicore, you can do that on the Zynq SoCs, but it isn't present on the Kintex FPGAs. I find something a bit amusing about running Linux - or better still Coyotos :-) - on the ARM core so that it has something to do.
The real issue, I think, is how you want to talk to the outside world, how many devices you end up adding to the FPGA to do it, and how much space they take up. The "stick the board in and snoop the framebuffer" approach is low overhead on either device. If you want actual devices, there's a tax involved on the pure FPGA, because the devices have to be added to the FPGA logic.
Summarizing, I think either one can be a reasonable selection, and I feel like I need to get a PC set up and spend some time in the simulators to find out what other concerns I haven't seen yet.
Jonathan