Thank you, that is useful; I didn't know about that best practice for SVG -> WebGL texture uploads.
What I'm looking for is the high-level approach to how an SVG is rendered in real time, especially the <path> element when it is animated.
As soon as the path data is animated, the SVG would need to be re-uploaded into a texture every frame, which might not be feasible performance-wise, since there can be many SVGs.
Alternatively, if the SVG itself is animated, then instead of uploading the entire SVG at full size into a texture and drawing that texture on a quad with the appropriate transforms, it would be better to render only the part that is visible in the current camera view (i.e. in screen space). That keeps it sharp instead of pixelated and avoids rasterizing parts that are cut off by the screen, but it also means the SVG has to be re-uploaded every frame rather than just animating the quad that carries the SVG texture.
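To make that concrete, here is a minimal sketch of the per-frame "rasterize only the visible region and re-upload" idea I mean. It assumes the SVG's current animated state is available as a drawable `svgImage` (e.g. an HTMLImageElement re-created from the serialized SVG), that `gl` is a plain WebGLRenderingContext, and that `visibleRect` (in SVG user units) has already been computed from the camera; all of these names are placeholders, not any browser API.

```ts
// Scratch 2D canvas used as the rasterization target before upload.
const scratch = document.createElement("canvas");
const ctx = scratch.getContext("2d")!;

function uploadVisiblePart(
  gl: WebGLRenderingContext,
  tex: WebGLTexture,
  svgImage: HTMLImageElement,            // current frame of the (animated) SVG
  visibleRect: { x: number; y: number; w: number; h: number }, // in SVG units
  screenW: number,
  screenH: number,
) {
  // Rasterize only the visible region, at screen resolution, so it stays sharp.
  scratch.width = screenW;
  scratch.height = screenH;
  ctx.clearRect(0, 0, screenW, screenH);
  ctx.drawImage(
    svgImage,
    visibleRect.x, visibleRect.y, visibleRect.w, visibleRect.h, // source (SVG units)
    0, 0, screenW, screenH,                                     // destination (pixels)
  );

  // Re-upload every frame; with many animated SVGs this is the expensive part.
  gl.bindTexture(gl.TEXTURE_2D, tex);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, scratch);
}
```

This is exactly the brute-force path I'd like to avoid, which is why I'm asking how browsers approach it.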
If I ignore self-intersecting paths and compound paths (i.e. multiple subpaths) with various fill-rule settings, then I can triangulate the shape and simply draw the triangles. Of course I may have to triangulate more finely depending on the screen-space size of the SVG, but at least the drawing is fast if the SVG/camera is only being translated during the animation.
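For the simple case, this is roughly what I have in mind, sketched with the earcut library; `flattenPath` is a made-up placeholder for whatever converts the path's curves into a polyline at a tolerance derived from the on-screen size.

```ts
import earcut from "earcut";

// Triangulate one simple (non-self-intersecting, hole-free) subpath that has
// already been flattened into a polyline.
function buildFillMesh(points: { x: number; y: number }[]) {
  // earcut expects a flat [x0, y0, x1, y1, ...] array.
  const flat: number[] = [];
  for (const p of points) flat.push(p.x, p.y);

  // No hole indices passed here; holes/compound paths and fill-rule are
  // exactly the cases this naive approach does not handle.
  const indices = earcut(flat);

  return {
    positions: new Float32Array(flat),
    indices: new Uint16Array(indices),
  };
}
```

The mesh only needs rebuilding when the path data or the required tessellation tolerance changes; pure camera translation can reuse it as-is.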
However, this doesn't work once compound paths and fill-rule need to be supported, which is why I was wondering what browsers do. Do they generate triangles, do they implement the fill-rule in the fragment shader, or do they convert the paths into a hierarchy of boolean operations?
In the SVG spec, fill-rule is defined by shooting a ray from the point to infinity and counting how the ray crosses the path to decide whether that point is inside the shape. I wonder whether that's how it's actually implemented, or whether there is a better/faster way.
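For reference, this is the literal reading of that definition for both fill rules, with the shape given as a list of flattened subpaths. It describes how fill-rule is defined, not necessarily how any browser implements it.

```ts
type Pt = { x: number; y: number };

// Point-in-fill test by casting a horizontal ray from `p` towards +x and
// counting edge crossings (evenodd) or signed crossings (nonzero winding).
function insideFill(subpaths: Pt[][], p: Pt, fillRule: "nonzero" | "evenodd"): boolean {
  let crossings = 0; // used by evenodd
  let winding = 0;   // used by nonzero

  for (const path of subpaths) {
    for (let i = 0; i < path.length; i++) {
      const a = path[i];
      const b = path[(i + 1) % path.length]; // treat each subpath as closed
      // Does edge a->b straddle the ray's y coordinate?
      if ((a.y > p.y) !== (b.y > p.y)) {
        // x coordinate where the edge crosses y = p.y
        const xAtY = a.x + ((p.y - a.y) / (b.y - a.y)) * (b.x - a.x);
        if (xAtY > p.x) {
          crossings++;
          winding += b.y > a.y ? 1 : -1; // crossing direction
        }
      }
    }
  }
  return fillRule === "evenodd" ? crossings % 2 === 1 : winding !== 0;
}
```

Evaluating something like this per fragment seems expensive, hence my question about what the fast real-world approach is.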