The macOS, iOS, Android, and Linux versions all render the display by painting each black pixel separately, as a tiny black square or rectangle. I chose this approach over the more direct bitmap rendering APIs because those APIs apply smoothing when pixels are scaled up, which makes the display look blurry.
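The per-pixel approach boils down to very little code. Here is a minimal sketch in Python (the real versions call the platform drawing APIs, of course, and this is not the actual app code): each set bit in the logical bitmap becomes a solid `scale` × `scale` block, so there is no interpolation anywhere and edges stay perfectly crisp.

```python
def paint_pixels(bitmap, scale):
    """Expand a 1-bit bitmap (list of rows of 0/1) by drawing each set
    pixel as a solid scale x scale square. No interpolation is involved,
    so scaled-up pixels keep hard edges instead of going blurry."""
    out = [[0] * (len(bitmap[0]) * scale) for _ in range(len(bitmap) * scale)]
    for y, row in enumerate(bitmap):
        for x, bit in enumerate(row):
            if bit:
                # In the real apps this is a fillRect()-style call;
                # here we just set the covered cells.
                for dy in range(scale):
                    for dx in range(scale):
                        out[y * scale + dy][x * scale + dx] = 1
    return out
```

On screen, each filled block is drawn at the pixel's scaled position, which is also why non-integer scales produce squares *or* rectangles: adjacent blocks can round to slightly different sizes.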
A side effect of rendering pixels as individual squares/rectangles is that you can see faint lines between the pixels. I actually kind of like this artifact, because it resembles a real 42S-style LCD. But of course the artifacts you're seeing in the screenshots above are not desirable. I'll have to look into using a different rendering method for when pixels are very small.
Note that none of this affects the Windows version. The approach of painting individual pixels can't be used there, because the GDI+ API that I'm using doesn't support sub-pixel addressing. Instead, I use bitmap rendering with resolution-dependent interpolation modes, plus an intermediate bitmap at increased resolution (4×4 pixels for each logical pixel) to reduce the smoothing-induced blurriness.
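The intermediate-bitmap trick can be sketched like this (again Python, purely illustrative; the real Windows code goes through GDI+, and the box-filter resample here merely stands in for whatever interpolation mode the platform applies): the logical bitmap is first expanded 4× with nearest-neighbor, and only that larger bitmap is handed to the smoothing blit, so the smoothing has much less upscaling to do.

```python
def expand(bitmap, n):
    """Nearest-neighbor expand: each logical pixel becomes an n x n
    block of identical values. This is the 4x intermediate bitmap."""
    return [[bitmap[y // n][x // n]
             for x in range(len(bitmap[0]) * n)]
            for y in range(len(bitmap) * n)]

def resample_area(src, w, h):
    """Box-filter resample to w x h -- a stand-in for the platform's
    interpolating blit. Averages the source pixels covering each
    destination pixel, so values between 0 and 1 are 'blurry' gray."""
    sh, sw = len(src), len(src[0])
    out = []
    for y in range(h):
        y0 = y * sh // h
        y1 = max(y0 + 1, (y + 1) * sh // h)
        row = []
        for x in range(w):
            x0 = x * sw // w
            x1 = max(x0 + 1, (x + 1) * sw // w)
            total = sum(src[yy][xx]
                        for yy in range(y0, y1)
                        for xx in range(x0, x1))
            row.append(total / ((y1 - y0) * (x1 - x0)))
        out.append(row)
    return out
```

Because the intermediate bitmap is already close to (or larger than) the destination size, the final interpolated copy spreads any gray transition pixels over a much narrower band than interpolating straight from the 1× logical bitmap would.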
The best solution would probably be to use an intermediate bitmap sized just so that it corresponds 1:1 with the physical display pixels. That way I would have full control over the rendering process, and the final step of copying the bitmap to the display wouldn't involve any scaling at all, and thus wouldn't create any smoothing artifacts. But it would also be a PITA to implement!
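The 1:1 idea could look something like the following (a hypothetical sketch, not code from any of the ports; here the mapping from physical back to logical pixels is simple nearest-neighbor, though with full control over the buffer any mapping could be used): the buffer is allocated at exactly the physical display size, every physical pixel picks up the value of the logical pixel it falls inside, and the final blit is a plain unscaled copy.

```python
def render_1to1(bitmap, phys_w, phys_h):
    """Fill a buffer sized exactly phys_w x phys_h by mapping each
    physical pixel back to its logical source pixel (nearest neighbor).
    The subsequent copy to the screen needs no scaling, hence no
    smoothing artifacts."""
    lh, lw = len(bitmap), len(bitmap[0])
    return [[bitmap[y * lh // phys_h][x * lw // phys_w]
             for x in range(phys_w)]
            for y in range(phys_h)]
```

Note that at non-integer scales some logical pixels end up one physical pixel wider than their neighbors, but every edge stays perfectly sharp, and the gaps-between-pixels effect becomes a deliberate choice rather than a rendering accident.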
I'll have to think about this... I'm putting it on my to-do list.