Hi Jan,
First, there is no need to replace the drivers with NullDrivers.
OpenPnP has a built-in G-code simulator for external controllers.
The simple way is this:
Switch the driver communications to TCP/IP and set the IP address to GcodeServer (case sensitive). The port doesn't matter.
A more elaborate way is this:
Be sure to exit OpenPnP.
Take the machine.xml of your machine.
Replace the ReferenceMachine class with SimulationModeMachine:
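In the file that is just the class attribute of the machine element, something like this (the full package path below is from memory, treat it as an assumption and check it against what your machine.xml actually contains):

    <!-- before: -->
    <machine class="org.openpnp.machine.reference.ReferenceMachine" ...>
    <!-- after (keep all other attributes and child elements unchanged): -->
    <machine class="org.openpnp.machine.reference.SimulationModeMachine" ...>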
Restart OpenPnP.
This gives you a new tab with additional settings. It lets you switch the machine to various levels of simulation:

As soon as you enable any simulation mode (not Off), the
Replace Drivers? switch lets you implicitly
replace the Driver communications with a GcodeServer, i.e. you can
keep the original machine config intact.

The tab lets you simulate various imperfections of the machine; in your case, you can emulate those found in your real machine.
You still need to replace the cameras with the ImageCamera and SimulatedUpCamera (as you surely already know how to do).
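If you prefer to do that in machine.xml as well, it is again just the class attribute of the respective camera elements (package paths from memory, so verify them against your config; point the ImageCamera at your photo afterwards in its Device Settings):

    <!-- down-looking camera -->
    <camera class="org.openpnp.machine.reference.camera.ImageCamera" ... >
    <!-- up-looking camera -->
    <camera class="org.openpnp.machine.reference.camera.SimulatedUpCamera" ... >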
> replacing the image of the ImageCamera with a photo of the physical machine
Cool project. I have thought a lot about this but never actually tried it. I see two possibilities:
1. Photograph the whole machine table externally and use that photo as the ImageCamera image.
2. Have the machine stitch the table image together from many shots taken with its own down-looking camera.
Am I right to assume you're trying method 1?
I fear it is very difficult to get precise enough images. We need
±0.2 mm precision or so across the table; I don't expect any lens
to be that good. If you still want to try, consider using a wider
lens (a prime if possible) and then only taking the most
undistorted central part of the image, assuming you still have
enough resolution. And it will only work if your table and the
feeders surfaces are really at the same Z. Otherwise you'll get
bad scaling errors from perspective distortion.
I always wanted to add code in OpenPnP itself to do method 2
automatically. It would have to work using external diffuse
light, with as little shadow from the moving gantry as
possible. No way the built-in LED ring light would work; it would
produce an extremely "checkered" stitch.
The advantages are clear: you get a real camera view image with
the same resolution and overall quality. Precision across the
table is not a problem, i.e. it corresponds exactly to what you
get in operation. Perspective distortion across the table is
eliminated, and what little is present in the individual shots
could be reduced by stitching with much overlap and essentially
only taking a small central part of each photo. Getting the scale
right (Units per Pixel) would actually be part of the algorithm
(using stereoscopy to determine at which offsets the images
overlap with the least pixel difference). I was even thinking about
detecting variable 3D depth and then reconstructing the
perspective in simulation 😁 (OpenCV has some functions to use
there).
The code could then automatically create and configure the
ImageCamera, and make sure it replaces the real camera. I guess I
would make the cameras interchangeable somehow, so you can
switch back and forth between real machine operation and
simulation (see the Simulation Mode above).
But of course that would require quite some heavy coding... cool coding though 😎
> Where is the x=y=0 point with respect to the image?
Unfortunately, this can't be configured yet. The controller
coordinates (as interpreted by the GcodeServer) are directly taken as
the pixel coordinates (shifting the center of the camera view),
after being scaled by the Units per Pixel from the Device Settings
tab. But it could easily be added.
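Just to illustrate that mapping with made-up numbers: with Units per Pixel set to 0.05 mm and the simulated camera commanded to X=10, Y=5, the view is shifted by 10 / 0.05 = 200 px in X and 5 / 0.05 = 100 px in Y relative to the image origin.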
Perhaps (as a rather unintended side effect) you could use the Homing
Error (not sure, and no time right now to go look).
> What "Units per Pixel" shall I configure on the "Device Settings" tab?
You need to figure this out by measuring the distance between two
known points on the real machine, then finding those points in the
photo and determining the pixel distance. Use the Euclidean distance
formula: sqrt(dx² + dy²). Note: pixels should be square,
or this becomes much more complex.
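A made-up example of the calculation: two fiducials that are 100 mm apart on the machine end up dx = 1230 px and dy = 260 px apart in the photo, so the pixel distance is sqrt(1230² + 260²) ≈ 1257 px, and Units per Pixel ≈ 100 mm / 1257 px ≈ 0.08 mm.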
> does the "Viewing Scale" change the distances on the machine?
Not really. It does just that: scale the image as seen by the camera. This happens after the image has been shifted to the right position using the Units per Pixel from the Device Settings tab.
I would not recommend using Viewing Scale at all. I only
used it to test automatic camera calibration in I&S. If you
need a different image resolution, just scale the photo outside
OpenPnP.
> camera jogging does not work at all.
Check Issues & Solutions. Chances are, you haven't
mapped the X and Y axes, i.e. the mapping was lost when you replaced the
camera.
_Mark
> That's very cool! However, I've got some exception when
starting the machine without enabling the simulation mode first.
On second try I enabled the simulation mode before starting the
machine and everything works out of the box.
Would need more info (log).
> adding an offset feature to the ImageCamera (I could create a PR if you think it might be useful for others too).
Yes please.
> However, there is room for improvements in the code:
when changing the scaling, the general Units per Pixel have to
be updated for correct ruler overlay and precise camera jogging.
I've fixed that as well (can provide a PR).
This is a misunderstanding. I added the scaling expressly so I can test Units per Pixel calibration (I&S). Fixing the general Units per Pixel automatically would defeat the very purpose 😁
Like I said: if you want to scale the image, I recommend doing it
externally. There are amazing upscalers nowadays, including those
employing AI to "guess" more detail than what the image actually
provides. I expect these to correctly recognize circular features
like sprocket holes, PCB fiducials, etc., so you can more
realistically test vision. Plus, the image would appear much
sharper.
> I think, I've found out what went wrong: I completely
misunderstood the "Pixel Dimension" properties. Now I think they
describe the dimension of the image, that is simulated.
Correct.
_Mark
Hi Jan
> I replayed the situation and found that it's a timeout exception in the GCodeDriver that happens. Which is understandable because the serial link to the machine is present but the machine is off...
This should still not happen. It seems it tries to disconnect
when it is not connected in the first place. I guess it is no big
deal, but I'll keep it in mind.
> Would you please suggest an upscaler that accepts about
4kx3k images for scale to ~16kx12k? I found a few services
accepting only small inputs up to 1 or 2k. Thank You!
Maybe not online. I just used the Gigapixel AI trial; it seems to
work. I scaled a ~6kx6k image to 24kx24k and used it in an ImageCamera.
It's not free though; the trial adds a watermark all over.
https://www.topazlabs.com/gigapixel-ai
I will keep looking (this is something that interests me in other
areas as well). I might actually buy a product, and then perhaps
you could send me your image to scale.
_Mark
I just tried the open source program Upscayl:
https://github.com/upscayl/upscayl
It scaled to 20kx20k no problem.
I made an artificially blurred and downsized image to test with, and it
seems to do a good and realistic-looking job on the REMACRI
setting:

