Hardware-assisted Coyotos: first look at board options


Jonathan S. Shapiro

Feb 25, 2026, 6:54:26 PM
to cap-talk
This is way out ahead of everybody's skis, but it won't hurt to have time to process it.

Questions

This is an investigation of core processor behavior rather than a co-processor thing. There are four main questions for choosing a board to experiment with:
  1. Resources: What FPGA resources does a project like this need?
  2. Peripherals: Since we eventually want to run this as something like a single-board computer (SBC), what peripheral requirements do we have that might not be typical in the SoC/FPGA world?
  3. Cost: What does it cost to get the features and resources we need on a suitable board, and how do we minimize that?
  4. Device Style: We could use either an SoC or an FPGA. What are the pros and cons? "SoC" in this context means an FPGA with a hard core (in this case an ARM Cortex) and some peripherals.
Resources

My sense is that we're going to end up with two board selections. One will be larger, for the initial hardware development part of this, and will be somewhat expensive. We won't know what our FPGA resource requirements are until we start working on things, and under those circumstances it seems better to have it and not need it. The two boards I'm currently considering run $1043 (FPGA only) and $1669 (SoC with hard cores). I need to validate that by throwing some things into the simulator; they may be total overkill. It won't take that long to get a clearer picture, and then we can select a cheaper board that better reflects what we actually need.

The other will be a board to run Coyotos on. By that point we'll have a handle on what our pseudo-SBC needs, and we should be able to find cheaper alternatives. I'm guardedly hopeful that we might get away with a board like this one using the Artix-7 XC7A200T part. At the price, it's pretty tough to beat. The number of collaborators we can interest is going to have a lot to do with that price tag.

It's also possible to rent FPGA boards on AWS. $1.50-ish an hour sounds cheap until you forget to turn the thing off one night; at that rate a board left running comes to about $36 a day, or over $1,000 a month, which is buy-the-board territory. Financially, I think it's a risky approach.

Peripherals

All of these are PCIe cards, but it is possible to run them standalone on a desktop. If you end up running this as a standalone card, my sense is that you'll probably want connectors for a keyboard, Ethernet, and an SSD as peripherals. At first glance it looks like the FPGA-only boards are cheaper, but the stuff you need to add to support an SSD typically erases the price difference, and the hardware support for those interfaces takes up space in the FPGA fabric. Those issues may make the SoC solutions cheaper, because most of this stuff is already on the SoC boards.

Since you'll need to run the design software on a PC, an alternative is to drop the card into the PC as well and configure PCIe support on the card so that you can talk to it. I don't know yet what the tools provide, but worst case we can rig up an application on the PC side that snoops a frame buffer, implements some kind of keyboard-like transfer, and provides a block of host-side disk or SSD to the card to use as a drive. There are existing sample projects that provide VHDL for a lot of this. I'm expecting this to be the approach that I take.
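To make that concrete, here's a minimal sketch of the host-side "provide a drive" piece, assuming the FPGA design exposes a little mailbox in BAR0. The register layout below (CMD/LBA/STATUS and a 512-byte buffer) is entirely invented for illustration; only the mmap-the-BAR-via-sysfs mechanism is standard Linux. The real protocol would be whatever we put in the fabric.

/* Hypothetical host-side block-device shim: serves a disk image to the
 * card through a made-up mailbox in BAR0. Not any real card's interface. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define SECTOR      512
#define REG_CMD     0x00   /* hypothetical: 0 = idle, 1 = read, 2 = write */
#define REG_LBA     0x04   /* hypothetical: sector number */
#define REG_STATUS  0x08   /* hypothetical: host writes 1 when done */
#define BUF_OFF     0x200  /* hypothetical: 512-byte transfer buffer */

int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s /sys/bus/pci/devices/<bdf>/resource0 disk.img\n",
                argv[0]);
        return 1;
    }

    int bar_fd = open(argv[1], O_RDWR | O_SYNC);
    int img_fd = open(argv[2], O_RDWR);
    if (bar_fd < 0 || img_fd < 0) { perror("open"); return 1; }

    /* Map the card's BAR0; 4 KiB covers the toy mailbox. */
    volatile uint8_t *bar = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                                 MAP_SHARED, bar_fd, 0);
    if (bar == MAP_FAILED) { perror("mmap"); return 1; }

    volatile uint32_t *cmd    = (volatile uint32_t *)(bar + REG_CMD);
    volatile uint32_t *lba    = (volatile uint32_t *)(bar + REG_LBA);
    volatile uint32_t *status = (volatile uint32_t *)(bar + REG_STATUS);
    uint8_t sector[SECTOR];

    for (;;) {                      /* poll for requests from the card */
        uint32_t c = *cmd;
        if (c == 0) { usleep(100); continue; }

        off_t off = (off_t)*lba * SECTOR;
        if (c == 1) {               /* card read: image -> mailbox buffer */
            pread(img_fd, sector, SECTOR, off);
            memcpy((void *)(bar + BUF_OFF), sector, SECTOR);
        } else if (c == 2) {        /* card write: mailbox buffer -> image */
            memcpy(sector, (const void *)(bar + BUF_OFF), SECTOR);
            pwrite(img_fd, sector, SECTOR, off);
        }
        *status = 1;                /* signal completion to the card */
        *cmd = 0;
    }
}

The card side would spin on REG_STATUS the same way this loop spins on REG_CMD; the keyboard and framebuffer channels would just be more mailboxes of the same shape.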

A third option is to run some kind of remote desktop on the board. At that point you can either run the board standalone or have it in a PCIe slot. If you're going that route, you're not going to want to wait for the FPGA design to be stable, so I suspect in this case you'll want an SoC so that you have the hard core to run things on while the FPGA work is still falling over. In that case you'll probably want an SSD on the board; the hard-core boards all provide an M.2 connector.

Treating the board as a card in your PC rather than running it standalone avoids some Frankenstein cabling, which seems convenient.

Cost

I think I've addressed this as far as it can be answered at this time.

Device Style

There's a strongly held view on Reddit that if you're trying to build a CPU rather than a co-processor, you should use an FPGA and not try to deal with all of the distracting SoC stuff. It means doing all of the peripheral support in the FPGA fabric, which has resource and budget implications, but I can definitely appreciate the "don't try to deal with two machines at once" mindset here. In a pinch, you can incorporate a small, vendor-provided soft core in the FPGA alongside the experimental core so that you have a place to run Linux.

The counterargument (in my mind, anyway) is that modifying a CPU is a complex undertaking, and it can be helpful to have a place to stand on the part to deal with debugging, device bridging, power management, and stuff like that. Then there's the advantage that on an SoC board all of the peripherals are attached to the hard core. Having spent some quality time with the manuals and a bunch of YouTube videos, I think things are more flexible than the naysayers believe. Yes, a few devices will need to be proxied by the hard core, but that really isn't all that exciting; a sketch of what such a proxy might look like follows. I think there is some element of "kernels are a little alien to FPGA people, so better not to have to mess with it," though that's certainly not universal. I'm more or less coming from the other side of that mirror: I don't see a baby kernel for the hard core as particularly hard, and I think that isolation between the dev-support code and the processor on the FPGA is really helpful if you are going to be running two cores.
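Here's roughly what I mean by "not all that exciting," as a bare-metal sketch for the hard core: it watches a ring buffer in shared on-chip memory that the experimental soft core writes to, and forwards console bytes to a UART it owns. Every address and status bit below is a made-up placeholder, not a real Zynq memory map.

/* Hypothetical "baby kernel" device proxy for the hard core. */
#include <stdint.h>

#define MAILBOX_BASE  0xFFFC0000u  /* hypothetical shared OCM region */
#define UART_BASE     0xE0001000u  /* hypothetical hard-core UART */

typedef volatile struct {
    uint32_t head;                 /* written by the soft core */
    uint32_t tail;                 /* written by the hard core */
    uint8_t  ring[4096];           /* console bytes from the soft core */
} mailbox_t;

#define mbox       ((mailbox_t *)MAILBOX_BASE)
#define UART_FIFO  (*(volatile uint32_t *)(UART_BASE + 0x30))
#define UART_STAT  (*(volatile uint32_t *)(UART_BASE + 0x2C))
#define TX_FULL    (1u << 4)       /* hypothetical status bit */

static void uart_putc(uint8_t c)
{
    while (UART_STAT & TX_FULL)    /* spin until the TX FIFO drains */
        ;
    UART_FIFO = c;
}

void proxy_main(void)
{
    for (;;) {
        /* Drain whatever the soft core queued since the last pass. */
        while (mbox->tail != mbox->head) {
            uart_putc(mbox->ring[mbox->tail % sizeof(mbox->ring)]);
            mbox->tail++;
        }
    }
}

A real version would add a request direction (hard core to soft core), interrupts instead of polling, and one ring per proxied device, but the shape stays this simple.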

Another question, I suppose, is what other things people may want to explore. If you're interested in playing with an ARM multicore, you can do that on the Zynq SoCs, but it isn't present on the Kintex FPGAs. I find something a bit amusing about running Linux - or better still Coyotos :-) - on the ARM core so that it has something to do.

The real issue, I think, is how you want to talk to the outside world, how many devices you end up adding to the FPGA to do it, and how much space they take up. The "stick the board in and snoop the framebuffer" approach is low overhead on either device. If you want actual devices, there's a tax involved on the pure FPGA, because the devices have to be added to the FPGA logic.


Summarizing, I think either one can be a reasonable selection, and I feel like I need to get a PC set up and spend some time in the simulators to find out what other concerns I haven't seen yet.


Jonathan

William ML Leslie

Feb 28, 2026, 1:51:17 AM
to cap-...@googlegroups.com
In other words: how much FPGA is the correct amount of FPGA for doing CHERI research?

Jonathan S. Shapiro

Feb 28, 2026, 7:52:25 AM
to cap-...@googlegroups.com
On Fri, Feb 27, 2026 at 10:51 PM William ML Leslie <william.l...@gmail.com> wrote:
In other words: how much FPGA is the correct amount of FPGA for doing CHERI research?

More or less. But also: for me, personally, there are things I want to explore that are not related to Coyotos. I don't want my interests to become a board cost problem for people who don't share them.

Since the Coyotos kernel supports multicore, it's important to have that available on the FPGA - at least as an option. I spent some time kicking ChatGPT around this evening to figure out what resources are needed for that. We can skip vector support for now, implement it in software, or possibly implement it in the style of an offline coprocessor. If we do, ChatGPT thinks we ought to be able to get a satisfactory multicore into 200k Xilinx logic cells along with a PCIe controller. If that is true, then we are probably good with the XC7A200T board at $361 (basically an Arty A7 with twice the number of cells), or its Kintex UltraScale+ sibling at $890. There's an Artix US+ option in there (XAU25P) at $765, but for the extra $125 the Kintex is faster and better resourced on DRAM and a couple of other things.
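For a rough sense of how a 200k-cell budget might break down, here is a purely illustrative tally; every per-item number is a placeholder guess, not a measurement from any synthesis run:

  4 in-order cores @ ~35k logic cells each    ~140k
  PCIe endpoint + DMA                          ~25k
  interconnect / DRAM controller glue          ~20k
  -------------------------------------------------
  total                                       ~185k  (the XC7A200T is ~215k)

If the per-core cost turns out closer to 50k cells, four cores no longer fit, and we'd be trading core count against stepping up to the US+ parts.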

If you want to mess with out-of-order microarchitecture (as I do), the minimum resource is more like 475k logic cells. 1.5M is better. Needless to say, I won't be buying that last one.


I'm planning to spend a chunk of today loading existing soft cores into the tool chain and running the necessary steps to find out how the real numbers come out.

In preparation for that, I spent some time asking ChatGPT to generate plans for migrating from XC7A200T to the various other options. Some changes are definitely needed. The newer generation has different timing options, there are some minor differences in the IP blocks, and so forth. ChatGPT turns out to be surprisingly good at producing reasonable instructions for migrations. I may try dropping the cheriot-kudu design into Codex to see what happens when I say "port this to Artix".

I can't afford to buy all of these boards unless somebody wants to pitch in some cash, and the "minimum port" looks modest enough that Codex can probably handle it. But let's get something working first. :-)


Jonathan

Ben Laurie

Feb 28, 2026, 11:11:00 AM
to cap-...@googlegroups.com
It depends a lot on what variety of CHERI research you want to do, since CHERI runs on everything from teeny-tiny microcontrollers (such as the Ibex used in OpenTitan) right up to datacentre class CPUs (e.g. the Arm Morello prototype).

BTW, I should note that there are $7k FPGAs going on eBay for 1% of list ... surplus from Alibaba or something like that, apparently.

Once more, I would be happy to come to a Friam or a specially arranged meeting to discuss CHERI and dig into questions like this more deeply.

The Sonata board, for example, is sufficient to run CHERIoT (which is the CHERI-Ibex mentioned above): https://lowrisc.org/news/sonata-v1-0-release/.


On Sat, 28 Feb 2026 at 06:51, William ML Leslie <william.l...@gmail.com> wrote:
In other words: how much FPGA is the correct amount of FPGA for doing CHERI research?
