Hi folks,
HLS4ML noob here! For the CNN example project from the hls4ml tutorial repo on GitHub <
https://github.com/fastmachinelearning/hls4ml-tutorial/blob/main/part6_cnns.ipynb >, I have synthesised the model in Vivado HLS and opened the generated IP in Vivado.

This is probably a very basic question, but how do you input data into the model?
Since there are three 16-bit input vectors, I imagine you feed 16 bits of the "Red" part of a pixel, 16 bits of the "Green" part, and 16 bits of the "Blue" part into the input vector ports, clock in all the pixels in raster order, and drive the ready and done (etc.) signals like regular AXI transactions. But I don't know whether there's a standard way to get an image into a CNN when doing it embedded, or whether there's just a known convention for these sorts of cores.
Thanks in advance,
Roland