Another possible performance enhancement


Olivier Brault

Nov 10, 2017, 3:57:11 AM
to marlin-renderer
Hi Laurent,

In my app, I draw a lot of "polyline" objects (the French high-voltage electric network); it represents about 1 million vertices.
To increase performance, I have implemented an adaptive simplification before creating the Shape objects to draw.
By adaptive, I mean that the level of simplification depends on the zoom: I specify an error in pixels (for example 1 px max), then I convert this tolerance into a real-world length (it depends on the zoom level), then I simplify the geometries of the polylines and, at the end, I create the Shape objects.
Therefore, the more I zoom in, the more details remain: the drawing needs more vertices, but many objects are out of view, so performance stays roughly constant across zoom levels.
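
For reference, a minimal sketch of how the pixel tolerance can be turned into a world-space tolerance, assuming a roughly uniform world-to-screen scale taken from the view transform (worldToScreen, buildShape and simplify are illustrative names, not my actual code):

// Convert a tolerance expressed in pixels into world units, assuming a roughly
// uniform scale in the world-to-screen AffineTransform of the current view.
double scale = Math.sqrt(Math.abs(worldToScreen.getDeterminant())); // pixels per world unit
double worldTolerance = tolerancePx / scale;                        // e.g. 1 px -> world length
Shape shape = buildShape(simplify(polyline, worldTolerance));       // hypothetical helpers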

In my case, the time spent in the simplification pass (classical Douglas-Peucker) is less than the time saved during the drawing.

For example, a full draw with all vertices takes about 500 ms, while with a tolerance of 1 px the drawing takes only about 150 ms, including the simplification pass.

I wonder if this kind of simplification could be done directly in Marlin, from a Shape object, before actually drawing it.
There are faster algorithms than Douglas-Peucker.
Very often, simplification algorithms do not preserve topology (after simplification, a polygon may intersect itself): topology-preserving algorithms exist but are very slow; the JTS library offers this kind of algorithm.
But in the case I am talking about, the tolerance is very low (in pixels), so the simplification is almost invisible: even if a polygon intersects itself, it is not visible.

Have you ever considered this kind of optimisation in the Marlin renderer?

Olivier

Olivier Brault

Nov 10, 2017, 4:47:12 AM
to marlin-renderer
Hmm, sorry, I just read an old post where you explain that simplification adds an overhead bigger than the time saved by drawing fewer vertices.
Maybe it depends on the situation (in my case, the gain is 3x!).
Maybe it is worth trying to simplify only under certain conditions, to avoid useless overhead:
for example, when the number of vertices exceeds some threshold,
or when the ratio of vertex count to the bounding-box size of the Shape exceeds a certain value (a rough sketch of such a gating check follows).

Unfortunately, I'm not an expert in that domain ...
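
For illustration only, the kind of gating check mentioned above could look like this (the thresholds and the shape/vertexCount names are made up, not a tested heuristic):

// Only pay the simplification cost when it is likely to pay off (hypothetical heuristic).
Rectangle2D bbox = shape.getBounds2D();                        // bounding box in device pixels
double pxArea = bbox.getWidth() * bbox.getHeight();
boolean worthSimplifying = vertexCount > 1_000                 // enough vertices...
        && vertexCount / Math.max(pxArea, 1.0) > 0.05;         // ...and dense relative to its footprint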

Olivier

Laurent Bourgès

Nov 13, 2017, 4:31:30 AM
to marlin-...@googlegroups.com
Hi Olivier,

In my app, I draw a lot of "polyline" objects (the French high-voltage electric network); it represents about 1 million vertices.

Some tuning could help Marlin:
- initial edges: increase it a lot
- degrade the AA subpixel count to 4x4 (log2 = 2)

Use the doStats flag to collect Marlin statistics for your case and send me the logs (stdout) so I can help you tune Marlin for your use case.
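
For reference, this roughly corresponds to a command line like the following (property names as I recall them from the Marlin README, values purely illustrative; please double-check against the version you are running):

JAVA_OPTS="-Dsun.java2d.renderer.edges=65536 \
 -Dsun.java2d.renderer.subPixel_log2_X=2 -Dsun.java2d.renderer.subPixel_log2_Y=2 \
 -Dsun.java2d.renderer.doStats=true $JAVA_OPTS"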

To increase performance, I have implemented an adaptive simplification before creating the Shape objects to draw.

Excellent idea. I advise you to prepare the view shape according to the zoom level once, reuse it as long as you can, and render that.

It will reduce the simplification overhead: do not redo it every time you render a frame!

By adaptive, I mean that the level of simplification depends on the zoom: I specify an error in pixels (for example 1 px max), then I convert this tolerance into a real-world length (it depends on the zoom level), then I simplify the geometries of the polylines and, at the end, I create the Shape objects.

Do you use Path2D.getPathIterator(tolerance) or custom code?

I could do it in Marlin... but it would have a cost, as it would simplify all shapes all the time...
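
For context, the built-in getPathIterator(AffineTransform, flatness) only flattens curves (it subdivides quadratic/cubic segments until they fit within the given flatness); it never drops straight line segments, so it is not a decimation. A minimal sketch:

// Flattening via the standard API: curves are subdivided until flat enough,
// but existing straight segments are kept as-is (no decimation).
PathIterator it = shape.getPathIterator(transform, 0.5); // flatness in device units
double[] coords = new double[6];
while (!it.isDone()) {
    int type = it.currentSegment(coords); // only SEG_MOVETO / SEG_LINETO / SEG_CLOSE here
    // ... feed the segment to the renderer or to a custom simplifier ...
    it.next();
}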

Therefore, the more I zoom in, the more details remain: the drawing needs more vertices, but many objects are out of view, so performance stays roughly constant across zoom levels.

In my case, the time spent in the simplification pass (classical Douglas-Peucker) is less than the time saved during the drawing.

For example, a full draw with all vertices takes about 500 ms, while with a tolerance of 1 px the drawing takes only about 150 ms, including the simplification pass.

3x faster is awesome!
Do you use Marlin 0.8.2 with clipping support?


I wonder if this kind of simplification could be done directly in Marlin, from a Shape object, before actually drawing it.

Marlin already has a simplifier for collinear segments.
It would be a good idea to experiment with a stronger simplification, but visual quality is important too.
Could you provide a code sample, or at least some image outputs?

There are faster algorithms than Douglas-Peucker.
Very often, simplification algorithms do not preserve topology (after simplification, a polygon may intersect itself): topology-preserving algorithms exist but are very slow; the JTS library offers this kind of algorithm.
But in the case I am talking about, the tolerance is very low (in pixels), so the simplification is almost invisible: even if a polygon intersects itself, it is not visible.

Have you ever considered this kind of optimisation in the Marlin renderer?

Let's make it together for Marlin 0.9... 

Laurent

Andrea Aime

Nov 13, 2017, 4:35:10 AM
to marlin-...@googlegroups.com
On Mon, Nov 13, 2017 at 10:31 AM, Laurent Bourgès <bourges...@gmail.com> wrote:
Marlin already has a simplifier for collinear segments.
It would be a good idea to experiment with a stronger simplification, but visual quality is important too.
Could you provide a code sample, or at least some image outputs?

Hi,
just dropping a quick comment here: in GeoTools/GeoServer we have that kind of simplification. In our tests,
Douglas-Peucker was taking too much time, reducing the benefit significantly, and we settled for
a delta-x/delta-y based approach: basically, a point is removed if it is too close to the previous one.
The tolerance, if memory serves me right, is 0.75 pixels.
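
For illustration, a minimal standalone sketch of that kind of point decimation (not the actual GeoTools code):

// Drop a point if both its dx and dy from the last *kept* point are below the tolerance.
static int decimate(double[] xy, int nPoints, double tolPx) {
    int kept = 1; // the first point (index 0) is always kept
    for (int i = 1; i < nPoints; i++) {
        double dx = xy[2 * i]     - xy[2 * (kept - 1)];
        double dy = xy[2 * i + 1] - xy[2 * (kept - 1) + 1];
        if (Math.abs(dx) > tolPx || Math.abs(dy) > tolPx || i == nPoints - 1) {
            xy[2 * kept]     = xy[2 * i]; // compact the kept points in place
            xy[2 * kept + 1] = xy[2 * i + 1];
            kept++;
        }
    }
    return kept; // number of points kept
}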

Cheers
Andrea


Ing. Andrea Aime
@geowolf
Technical Lead

GeoSolutions S.A.S.
Via di Montramito 3/A
55054  Massarosa (LU)
phone: +39 0584 962313
fax: +39 0584 1660272
mob: +39  339 8844549

http://www.geo-solutions.it
http://twitter.com/geosolutions_it



Botond Kósa

Nov 13, 2017, 12:19:42 PM
to marlin-renderer
On Friday, November 10, 2017 at 9:57:11 AM UTC+1, Olivier Brault wrote:
There are faster algorithms than Douglas-Peucker.

Did you try Visvalingam-Whyatt simplification? It is costlier than Douglas-Peucker, but it only has to be performed once, and the result can be used for all zoom levels. It assigns a "significance" value to each vertex, which can be used to decide whether to draw or skip that vertex at a given zoom level.
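
For reference, a minimal O(n^2) sketch of the idea (illustration only, not the demo below; production code would use a priority queue):

// Visvalingam-Whyatt "effective area" ranking: each interior vertex gets the area
// threshold below which it can be dropped. At render time keep vertex i when
// significance[i] >= toleranceArea (tolerance expressed as an area, e.g. in px^2).
static double[] significance(double[] x, double[] y) {
    int n = x.length;
    double[] sig = new double[n];
    sig[0] = Double.POSITIVE_INFINITY;
    sig[n - 1] = Double.POSITIVE_INFINITY;       // endpoints are always kept
    int[] prev = new int[n], next = new int[n];  // doubly-linked list over remaining vertices
    for (int i = 0; i < n; i++) { prev[i] = i - 1; next[i] = i + 1; }
    double maxSoFar = 0.0;
    for (int removed = 0; removed < n - 2; removed++) {
        int best = -1;
        double bestArea = Double.POSITIVE_INFINITY;
        // find the remaining interior vertex with the smallest triangle area
        for (int i = next[0]; i != n - 1; i = next[i]) {
            double a = triArea(x[prev[i]], y[prev[i]], x[i], y[i], x[next[i]], y[next[i]]);
            if (a < bestArea) { bestArea = a; best = i; }
        }
        maxSoFar = Math.max(maxSoFar, bestArea); // keep significance monotonic so thresholds nest
        sig[best] = maxSoFar;
        next[prev[best]] = next[best];           // unlink the removed vertex
        prev[next[best]] = prev[best];
    }
    return sig;
}

static double triArea(double ax, double ay, double bx, double by, double cx, double cy) {
    return Math.abs((bx - ax) * (cy - ay) - (by - ay) * (cx - ax)) * 0.5;
}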

Here is a simple demo:

Regards,
Botond

Olivier Brault

Nov 14, 2017, 4:20:55 AM
to marlin-renderer
Hi.
It would really be a great joy to contribute to Marlin, but not in the next few months: I am overwhelmed for many reasons. Maybe in 2018.

I am also surprised by this 3x factor...
I am using Marlin 0.8.2.
(What do you mean by clipping support: is it an option?)
Currently, when the user zooms on the data, I use a QuadTree spatial index to select the data to show. But if a polyline starts in the view and continues far outside of it, I don't cut it. Is that the aim of your "clipping support"?

Maybe my 3x speedup is due to the fact that, in my case, drawing takes a lot of time.
Maybe I'm doing something wrong with the drawing.
Maybe with fine Marlin tuning this speedup will decrease and simplification will become less useful. I will try.

It is strange that GeoServer doesn't see any improvement.
Maybe if I set a 0.75 px tolerance instead of 1 px, my speedup factor will decrease too.
I admit I don't have the strict visual-quality requirements that a GIS must meet. In my case, a 1 px tolerance seems perfect.

In my case, when the user is dragging the view, if the last drawing took more than 200 ms, a quick mode is automatically used, with no AA, only during the drag: it is much faster.
When the mouse is released, the real AA drawing is done again.
Even in this non-AA mode, DP simplification still gives a 2x speedup.
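
For reference, toggling AA in Java2D is just a rendering hint; a minimal sketch (the dragging flag and the 200 ms threshold logic are application-specific placeholders):

Graphics2D g2d = (Graphics2D) g;
g2d.setRenderingHint(RenderingHints.KEY_ANTIALIASING,
        dragging && lastFrameMillis > 200 ? RenderingHints.VALUE_ANTIALIAS_OFF
                                          : RenderingHints.VALUE_ANTIALIAS_ON);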

I need to make some tests.


I can give you my DP simplification code, but it is quite "classical".
The only speed improvement I made is that, when giving it longitude/latitude coordinates, there is an option to use a quick approximate point-to-segment distance computation; otherwise a real "great circle" segment is used for the computation, but in that case the simplification is quite slow.

Concerning Visvalingam-Whyatt simplification, I will take a look.
In fact, all my data is kept in memory for performance, and adding metadata to each point bothers me (additional RAM use).

Thanks

Olivier

Olivier Brault

Nov 14, 2017, 7:45:03 AM
to marlin-renderer
I've made a quick comparison: with and without DP simplification.
I've captured 2 images at exactly the same position and zoom, so that it is easy, in the Windows photo viewer, to swap quickly between the 2 pictures and see the differences.
There are indeed small and subtle visible differences when swapping like this, but it is almost impossible, IMHO, to tell which image is the simplified one when seeing it on its own, without this "swap comparison".

(For information, these images represent the French high-voltage electric network.)

The first image is drawn with all the vertices (more than 1.1 million): it takes 520 ms to draw, on average.
The second image is drawn with a 0.75 px tolerance DP simplification: only 13% of the vertices are drawn, and it takes 160 ms to draw, on average.

Olivier
France nbPtTOTAL=1139516 nbPtDRAWN=100% duree=510 ms.png
France nbPtTOTAL=1139516 nbPtDRAWN=146419 (13%) duree=160 ms.png

Olivier Brault

Nov 14, 2017, 8:09:10 AM
to marlin-renderer
Here is my DP simplification code.
(It is not usable directly because it calls other classes; it is just for information.)
I still have 4 additional performance enhancements to implement and test ...

import java.util.ArrayList;
import static java.lang.Math.PI;
// TIntArrayList is GNU Trove's primitive int list; AGeom, AGeomSingle, I_DoubleArrayListRead,
// MyTDoubleArrayList, UtilsGeom, UtilsData and MyException are application classes (not included).

public class DouglasPeucker {

    static private final double K_DEG2RAD = PI / 180.0;
    static private final double K_M_BY_DEG = 40_075_000 / (2 * PI);

    // TODO PERF: test the bounding box of the geometry; if it is below the tolerance, keep only the end points!
    //   probably only useful when zoomed far out or for tiny geometries...

    // TODO PERF: check whether handling Z via a stride factor of 2 or 3 on the index
    //   into the flat coordinate array can be optimized

    // TODO PERF: return a special AGeom that is a simplified view of the input geometry
    //   -> small indirection to fetch the coordinates, but compensated by lower memory pressure

    // TODO PERF: instead of managing a list of point indices to keep, which then has to be sorted,
    //   try a bit array and scan it at the end to build the index list;
    //   otherwise, test FastUtil's radix sort, which may be faster

    // Simplifies the given AGeom object.
    // May return the geom object itself if the simplification keeps all points.
    // Handles AGeomMulti correctly: splits the geometries, simplifies them and rebuilds the whole.
    // Works with 2D or 3D geometries, but only simplifies on the 2D distance criterion (X and Y).
    // bLongLatWGS84 indicates that the coordinates are longitude/latitude -> specific distance computation;
    //   otherwise the algorithm assumes the coordinates are projected in an orthonormal frame!
    // bLongLatFastApprox, when bLongLatWGS84 is true, speeds up the distance computation with an approximation.
    public static AGeom douglasPeuckerReduction_global(
            AGeom geom, double tolerance,
            boolean bLongLatWGS84, boolean bLongLatFastApprox) {

        int nbSousGeom = geom.getNbSousGeom();

        if (nbSousGeom == 1) {
            // single geometry
            I_DoubleArrayListRead vCoord = geom.getSousGeom(0).getCoordsFlat();
            TIntArrayList indicesToKeep = douglasPeuckerReduction_global_indices(vCoord, tolerance, geom.hasZ(), bLongLatWGS84, bLongLatFastApprox);
            return geom.extract(indicesToKeep);
        }

        // several sub-geometries: process them one by one, independently
        ArrayList<AGeom> listGeom = new ArrayList<>(nbSousGeom);
        for (int i = 0; i < nbSousGeom; i++) {
            AGeomSingle ssGeom = geom.getSousGeom(i);
            I_DoubleArrayListRead vCoord = ssGeom.getCoordsFlat();
            TIntArrayList indicesToKeep = douglasPeuckerReduction_global_indices(vCoord, tolerance, geom.hasZ(), bLongLatWGS84, bLongLatFastApprox);
            listGeom.add(ssGeom.extract(indicesToKeep));
        }
        return AGeom.createGeomMulti(listGeom);
    }

    // May return vCoordFlat itself if the simplification keeps all points.
    // bHasZ true -> the coordinates are triplets, but the processing stays on x,y!
    public static MyTDoubleArrayList douglasPeuckerReduction_global(
            MyTDoubleArrayList vCoordFlat, double tolerance,
            boolean bHasZ, boolean bLongLatWGS84, boolean bLongLatFastApprox) {

        TIntArrayList indicesToKeep = douglasPeuckerReduction_global_indices(vCoordFlat, tolerance, bHasZ, bLongLatWGS84, bLongLatFastApprox);

        if (indicesToKeep == null) {
            // keep all points
            return vCoordFlat;
        }

        // rebuild a point list from the points kept by the algorithm
        MyTDoubleArrayList vCoordResult = new MyTDoubleArrayList(indicesToKeep.size() * 2);

        int nbPointToKeep = indicesToKeep.size();

        if (bHasZ) {
            for (int i = 0; i < nbPointToKeep; i++) {
                int index = 3 * indicesToKeep.get(i);
                vCoordResult.add(vCoordFlat.getQuick(index));
                vCoordResult.add(vCoordFlat.getQuick(index + 1));
                vCoordResult.add(vCoordFlat.getQuick(index + 2));
            }
        } else {
            for (int i = 0; i < nbPointToKeep; i++) {
                int index = 2 * indicesToKeep.get(i);
                vCoordResult.add(vCoordFlat.getQuick(index));
                vCoordResult.add(vCoordFlat.getQuick(index + 1));
            }
        }
        return vCoordResult;
    }

    // Works with 2D or 3D flat coordinates (set bHasZ = true in the 3D case).
    // Returns the (0-based) indices of the points to keep.
    // Returns null if all points must be kept.
    // The returned indices are sorted.
    public static TIntArrayList douglasPeuckerReduction_global_indices(
            I_DoubleArrayListRead vCoordFlat, double tolerance,
            boolean bHasZ, boolean bLongLatWGS84, boolean bLongLatFastApprox) {

        int nbPoint = bHasZ ? vCoordFlat.size() / 3 : vCoordFlat.size() / 2;
        if (nbPoint <= 2) {
            return null;
        }

        // OPTIMIZATION
        if (bLongLatWGS84 && bLongLatFastApprox) {
            // rough approximation: convert to pseudo-metres, assuming the data is very localized!
            // this approximation only works if there is no more than Pi between the min/max angles;
            // in particular, if some angles are at 1° and others at 359°, the computation is meaningless
            //   -> the 359° should then be converted to -1°

            // if hasZ: since we create a temporary list, we only put x,y in it, no need for a useless z...
            int nbPt = bHasZ ? vCoordFlat.size() / 3 : vCoordFlat.size() / 2;
            MyTDoubleArrayList vCoordLL = new MyTDoubleArrayList(nbPt * 2);

            double lg_rad_min = Double.POSITIVE_INFINITY;
            double lg_rad_max = Double.NEGATIVE_INFINITY;
            double lt_rad_min = Double.POSITIVE_INFINITY;
            double lt_rad_max = Double.NEGATIVE_INFINITY;

            int multIndex = bHasZ ? 3 : 2;
            for (int i = 0; i < nbPt; i++) {
                int index = multIndex * i;
                double lg_rad = vCoordFlat.getQuick(index++) * K_DEG2RAD;
                double lt_rad = vCoordFlat.getQuick(index) * K_DEG2RAD;

                if (lg_rad > lg_rad_max) lg_rad_max = lg_rad;
                if (lg_rad < lg_rad_min) lg_rad_min = lg_rad;
                if (lt_rad > lt_rad_max) lt_rad_max = lt_rad;
                if (lt_rad < lt_rad_min) lt_rad_min = lt_rad;

                double x = lg_rad * K_M_BY_DEG * Math.cos(lt_rad);
                double y = lt_rad * K_M_BY_DEG;
                vCoordLL.add(x);
                vCoordLL.add(y);
            }

            // check the angle spans
            if (Math.abs(lg_rad_max - lg_rad_min) > PI) {
                throw new MyException("LongLatFastApprox mode is not possible because the gap between the min and max longitude is > 180°: min = " + (lg_rad_min / K_DEG2RAD) + "°  max = " + (lg_rad_max / K_DEG2RAD) + "°");
            }
            if (Math.abs(lt_rad_max - lt_rad_min) > PI) {
                throw new MyException("LongLatFastApprox mode is not possible because the gap between the min and max latitude is > 180°: min = " + (lt_rad_min / K_DEG2RAD) + "°  max = " + (lt_rad_max / K_DEG2RAD) + "°");
            }

            // swap the variables...
            vCoordFlat = vCoordLL;
            bLongLatWGS84 = false;
            bHasZ = false;
        }
        // otherwise either we are in metres, or in long/lat but without the approximation

        int firstPoint = 0;
        int lastPoint = nbPoint - 1;

        TIntArrayList vIndexPointToKeep = new TIntArrayList(nbPoint);

        // add the first and last points
        vIndexPointToKeep.add(firstPoint);
        vIndexPointToKeep.add(lastPoint);

        douglasPeuckerReduction_recurs(vCoordFlat, firstPoint, lastPoint, tolerance, bHasZ, bLongLatWGS84, vIndexPointToKeep);

        if (vIndexPointToKeep.size() == nbPoint) {
            // no simplification performed!
            return null;
        }

        // careful: the previous call returns an unsorted index vector!
        vIndexPointToKeep.sort();
        return vIndexPointToKeep;
    }

    public static void douglasPeuckerReduction_recurs(
            I_DoubleArrayListRead vCoord, int firstPoint, int lastPoint, double tolerance, boolean bHasZ, boolean bLongLatWGS84,
            TIntArrayList vIndexPointToKeep) {

        double distMax_sq = 0;
        int indexFarthest = 0;

        // end of the recursion
        if (firstPoint >= lastPoint - 1) {
            return;
        }

        // the first and last points must not be identical
        while (arePointEqual(vCoord, bHasZ, firstPoint, lastPoint)) {
            lastPoint--;
            vIndexPointToKeep.add(lastPoint);
            if (firstPoint >= lastPoint - 1) {
                return;
            }
        }

        int multIndex = bHasZ ? 3 : 2;

        // find the index of the point farthest from the segment [firstPoint, lastPoint]
        double xA, yA, xB, yB, xI, yI;
        xA = vCoord.getQuick(firstPoint * multIndex);
        yA = vCoord.getQuick(firstPoint * multIndex + 1);
        xB = vCoord.getQuick(lastPoint * multIndex);
        yB = vCoord.getQuick(lastPoint * multIndex + 1);

        for (int i = firstPoint + 1; i < lastPoint; i++) {
            xI = vCoord.getQuick(i * multIndex);
            yI = vCoord.getQuick(i * multIndex + 1);

            double dist_sq;
            if (bLongLatWGS84) {
                dist_sq = UtilsGeom.getDistPointSegWGS84(xI, yI, xA, yA, xB, yB);
                dist_sq = dist_sq * dist_sq;
            } else {
                dist_sq = UtilsGeom.getDistPointSegSq(xI, yI, xA, yA, xB, yB);
            }

            if (dist_sq > distMax_sq) {
                distMax_sq = dist_sq;
                indexFarthest = i;
            }
        }

        // if this farthest point exceeds the tolerance, keep it and recurse on the two sub-lines around it
        if (distMax_sq > (tolerance * tolerance) && indexFarthest != 0) {
            vIndexPointToKeep.add(indexFarthest);
            douglasPeuckerReduction_recurs(vCoord, firstPoint, indexFarthest, tolerance, bHasZ, bLongLatWGS84, vIndexPointToKeep);
            douglasPeuckerReduction_recurs(vCoord, indexFarthest, lastPoint, tolerance, bHasZ, bLongLatWGS84, vIndexPointToKeep);
        }
    }

    private static boolean arePointEqual(I_DoubleArrayListRead vCoord, boolean bHasZ, int index1, int index2) {
        int multIndex = bHasZ ? 3 : 2;

        double x1 = vCoord.getQuick(index1 * multIndex);
        double y1 = vCoord.getQuick(index1 * multIndex + 1);
        double x2 = vCoord.getQuick(index2 * multIndex);
        double y2 = vCoord.getQuick(index2 * multIndex + 1);

        return UtilsData.equals_double_notNull(x1, x2) && UtilsData.equals_double_notNull(y1, y2);
    }
}


And for information, the code to compute the point-to-segment distance:

public static double getDistPointSegSq(double xP, double yP,
        double xA, double yA,
        double xB, double yB) {

    double dx = xB - xA;
    double dy = yB - yA;

    if (dx != 0 || dy != 0) {

        double t = ((xP - xA) * dx + (yP - yA) * dy) / (dx * dx + dy * dy);

        if (t > 1) {
            xA = xB;
            yA = yB;
        } else if (t > 0) {
            xA += dx * t;
            yA += dy * t;
        }
    }

    dx = xP - xA;
    dy = yP - yA;

    return dx * dx + dy * dy;
}

public static double getDistPointSegWGS84(double xP, double yP,
        double xA, double yA,
        double xB, double yB) {

    double hP = 0;

    double lata = Math.toRadians(yA);
    double lnga = Math.toRadians(xA);
    double latb = Math.toRadians(yB);
    double lngb = Math.toRadians(xB);
    double latp = Math.toRadians(yP);
    double lngp = Math.toRadians(xP);
    double sinlata = Math.sin(lata);
    double coslata = Math.cos(lata);
    double sinlnga = Math.sin(lnga);
    double coslnga = Math.cos(lnga);
    double sinlatb = Math.sin(latb);
    double coslatb = Math.cos(latb);
    double sinlngb = Math.sin(lngb);
    double coslngb = Math.cos(lngb);
    double sinlatp = Math.sin(latp);
    double coslatp = Math.cos(latp);
    double sinlngp = Math.sin(lngp);
    double coslngp = Math.cos(lngp);
    double costh = sinlata * sinlatb
            + coslata * coslatb * (coslnga * coslngb + sinlnga * sinlngb);
    double sin2th = 1 - costh * costh;
    if (sin2th < 1.0E-8) {
        // a and b are very close; return the distance from a to p
        double costhp = sinlata * sinlatp
                + coslata * coslatp * (coslnga * coslngp + sinlnga * sinlngp);
        return FastMath.acos(costhp) * (R + hP);
    }
    double num = sinlata * (coslatb * coslatp * coslngb * sinlngp
            - coslatb * coslatp * sinlngb * coslngp)
            + coslata * coslnga * (coslatb * sinlatp * sinlngb
            - sinlatb * coslatp * sinlngp)
            + coslata * sinlnga * (sinlatb * coslatp * coslngp
            - coslatb * sinlatp * coslngb);
    double sinr = Math.abs(num) / Math.sqrt(sin2th);
    return (R + hP) * FastMath.asin(sinr);
}




Andrea Aime

Nov 14, 2017, 8:19:27 AM
to marlin-...@googlegroups.com
Hi Olivier,
I see you are keeping all the data in memory, and it's high resolution... that's not a use case we have in GeoServer,
where normally tens (to hundreds of thousands) of layers need to be served (and memory comes at a premium,
as it needs to be split among many concurrent requests, all allocating drawing surfaces in memory).

So typically the datasets are resolution dependent: a high-resolution one like yours would display only when fairly
zoomed in, and would have a smaller replacement at lower scales.
As a result, the simplification does not manage to remove as many points; typically it's 1/10 or 1/100.

That does not mean memory caching is banned from GeoServer, but nobody has come forward with funding
to implement it as an optional behaviour :-)

Cheers
Andrea





--

Regards,

Andrea Aime

Olivier Brault

Nov 14, 2017, 11:03:26 AM
to marlin-renderer
Hi Andrea,

In fact, as you explain well, my need is quite different from that of a GIS server...
I need an ultra-responsive tool, whatever the user does or computes.

I also use zoom-dependent display for some layers (for layers with millions of points, I use a zoom level to trigger the display).
But as I need everything in memory, I don't handle multi-resolution layers; I prefer to adapt the resolution when drawing.

Example of what my tool does:
For instance, the pictures shown before are made from a shapefile (30 MB, 8800 objects, 1.1 million vertices, 16 attributes per object) which:
  • is loaded from disk and "decoded" in 200 ms
  • is then transformed into a drawing "pipeline" (a list of drawing actions, with color/stroke derived from the attributes, etc.) in 100 ms
  • then interactive drawing with zoom is presented to the user, and it must also be as responsive as possible
In that example, I don't want to create multi-resolution layers after loading the shapefile, and I also don't want the user to handle such data management: it must be transparent for him.
The simplest way I found is to simplify the data on the fly, as it is drawn.

Olivier




Laurent Bourgès

Nov 16, 2017, 4:03:15 AM
to marlin-...@googlegroups.com
Olivier & Andrea,

I read the full thread and it now contains lots of information: thanks!

You piqued my curiosity, so I will try implementing my own basic simplifier (radial distance threshold):

It just adds another stage in Marlin to ignore useless line AND curve segments: it is easy, as it does not need any storage (just test the current point against the previous one).

It will be more efficient than DP: no intermediate stack nor allocation...

Stay tuned,
Laurent

Michael Paus

Nov 16, 2017, 9:09:17 AM
to marlin-...@googlegroups.com
Hi,
I'd also like to contribute my two cents to this discussion. Although I am also a big fan of
geometry simplification to increase performance, I still want to throw in a few
words of caution.

I once had to display some geometry like this:

https://www.sky-map.de/images/ipgafor.png

The original data was very fine-grained, so for efficient rendering some simplification was
necessary. The problem, though, was that the region outlines were modeled as polygons
and not as connected line strings, so many polygon sections were redundant. When you
now start to automatically simplify these polygons, most algorithms end up with different
solutions for previously identical sections, and this really looks ugly. This depends of course
on how aggressively you simplify and which algorithm you use.

So my personal conclusion is that the introduction of some automatic simplification will at
least need some mechanism to activate or deactivate it. Maybe something like a rendering hint.

As I said, just my 2ct.

Michael



Laurent Bourgès

Nov 16, 2017, 12:06:13 PM
to marlin-...@googlegroups.com
Hi Michael,


On Nov 16, 2017 at 3:09 PM, "Michael Paus" <m...@jugs.org> wrote:
Hi,
I'd also like to contribute my two cents to this discussion. Although I am also a big fan of
geometry simplification to increase performance, I still want to throw in a few
words of caution.

I once had to display some geometry like this:

https://www.sky-map.de/images/ipgafor.png

The original data was very fine-grained, so for efficient rendering some simplification was
necessary. The problem, though, was that the region outlines were modeled as polygons
and not as connected line strings, so many polygon sections were redundant. When you
now start to automatically simplify these polygons, most algorithms end up with different
solutions for previously identical sections, and this really looks ugly. This depends of course
on how aggressively you simplify and which algorithm you use.

Yes, ideally topology-preserving algorithms are better...
Maybe quantizing the coordinates would help, but I am not an expert.
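
For illustration, coordinate quantization just snaps every device-space coordinate to a fixed grid (e.g. 1/8 pixel, i.e. one subpixel), so that shared borders collapse onto the same snapped vertices; a minimal sketch:

// Snap a device-space coordinate to a 1/8 pixel grid (illustration only).
static double quantize(double v) {
    return Math.rint(v * 8.0) / 8.0;
}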

So my personal conclusion is that the introduction of some automatic simplification will at
least need some mechanism to activate or deactivate it. Maybe something like a rendering hint.

Yes, I made an ultra-simple PathSimplifier:
- new flag usePathSimplifier true/false
- tolerance = 0.125 px by default, as it corresponds to 1/8th of a pixel, i.e. 1 subpixel distance
- it only tests the radial distance (fastest)

I tested with tolerance = 10, 2, 0.75, 0.5, 0.25 and 0.01 px:
- if > 1 px, I confirm it looks ugly: filled shapes are no longer enclosed by their stroked borders
- 0.5 px reduces the edge count a bit without noticeable artefacts: 0.003% RMS error

Performance is the same, +/- a few percent (low overhead), as most of my rendering cases are 'simple' enough = most were already processed by GeoServer and are therefore already simplified.

Will push to GitHub ASAP,
Laurent

Olivier Brault

Nov 17, 2017, 3:19:41 AM
to marlin-renderer
Hi all,

In fact, I use aggressive DP because I know that even with topology errors it will be almost invisible: I don't run any computation or geometric algorithm on the simplified data, so I don't care about topology errors.
So I agree to let the user deactivate it or tune the tolerance finely.

I think that even with a topology-preserving algorithm, there will still be problems with adjacent polygons.
Indeed, these algorithms guarantee that you don't get a self-intersecting shape, but they consider shapes independently.

In the past, for another project, I implemented a simplification algorithm that guaranteed that no intersections at all were created during simplification.
But even with that, adjacent polygons may still have small holes between them...

The only real solution is to have a topological description of the geometries, like in the TopoJSON format.

So: here the pragmatic solution is to set a very low tolerance and not bother about all this.

Laurent: your solution with a simple radial distance threshold could be insufficient in the case of many long straight lines.
But as you already have a collinear simplification, the combination of both can be efficient, I think: is there a tunable tolerance on the angle?

-> For information, can you print the statistics of the simplification ratio achieved by Marlin (like the total number of vertices before and after, and the % of reduction)?


As a matter of fact, when you are fed by a tool (like GeoServer) which already simplifies geometries, you may see almost no gain.
--> I will try your algorithm with my full-resolution data, and I am sure I will see improvements.
For info, I tried a 0.125 px tolerance and still have only 16% of the vertices to draw (200 ms compared to >500 ms without simplification).

I will also run tests on a better computer: the one I use every day is an ordinary laptop with poor graphics capabilities.
Maybe on a powerful computer the drawing time will be smaller and simplification will then be less useful, I don't know...

Olivier

Laurent Bourgès

Nov 17, 2017, 3:21:30 AM
to marlin-...@googlegroups.com
Olivier,

1. I pushed last night the new PathSimplifier for both float/double Marlin pipelines:
https://github.com/bourgesl/marlin-renderer/commit/0cca81dd1040eae6c687d7ee3eb5cf333cca6c75


Of course, you should clone the code & build marlin yourself:
mvn clean install

Olivier & anybody else: please test it in your use case. It requires you to discard your own Douglas-Peucker simplifier, but keep the lat/long coordinate conversion into the Shape's pixel coordinates.

To enable the path simplifier and define the distance tolerance (pixel):
JAVA_OPTS="-Dsun.java2d.renderer.usePathSimplifier=true -Dsun.java2d.renderer.pathSimplifier.pixTol=0.5 $JAVA_OPTS"

2. I will also try implementing coordinate quantization (1/8th pixel = 1 subpixel) ... to have a global (stroker / filler) simplifier that preserves topology:

Bye,
Laurent

Laurent Bourgès

Nov 17, 2017, 3:40:16 AM
to marlin-...@googlegroups.com
Olivier,

In fact, I use aggressive DP because I know that even with topology errors it will be almost invisible: I don't run any computation or geometric algorithm on the simplified data, so I don't care about topology errors.
So I agree to let the user deactivate it or tune the tolerance finely.

Done.
 
I think that even with a topology-preserving algorithm, there will still be problems with adjacent polygons.
Indeed, these algorithms guarantee that you don't get a self-intersecting shape, but they consider shapes independently.

In the past, for another project, I implemented a simplification algorithm that guaranteed that no intersections at all were created during simplification.
But even with that, adjacent polygons may still have small holes between them...

The only real solution is to have a topological description of the geometries, like in the TopoJSON format.

Apparently quantization preserves 'some topology rules'... let's try it soon (next week?)
 

So: here the pragmatic solution is to set a very low tolerance and not bother about all this.

Yes: 0.5 or 0.25 pixel seems good: fewer points, a small gain, and no artefacts except a few different (uncovered) pixels: the RMS error is invisible!
 
Laurent: your solution with a simple radial distance threshold could be insufficient in the case of many long straight lines.
But as you already have a collinear simplification, the combination of both can be efficient, I think: is there a tunable tolerance on the angle?

I plan to rewrite the collinear simplifier to be faster, and I will add tuning parameters, as I would like to use the area error instead.
 

-> For information, can you print the statistics of the simplification ratio achieved by Marlin (like the total number of vertices before and after, and the % of reduction)?

Will do when I have some free time (doStats as usual).

Let me insist: please send me your Marlin internal statistics (doStats=true/false) to measure the gains offered by your D&P simplifier.
 
As a matter of fact, when you are fed by a tool (like GeoServer) which already simplifies geometries, you may see almost no gain.
--> I will try your algorithm with my full-resolution data, and I am sure I will see improvements.

Excellent, please give me your results.
 
For info, I tried a 0.125 px tolerance and still have only 16% of the vertices to draw (200 ms compared to >500 ms without simplification).

I will also run tests on a better computer: the one I use every day is an ordinary laptop with poor graphics capabilities.
Maybe on a powerful computer the drawing time will be smaller and simplification will then be less useful, I don't know...

I have the most powerful laptop (i7-6800K, 32 GB + NVIDIA Quadro + 4K screen, Ubuntu 17.4): the ZBook G4 rocks!

PS: can you share (publicly or privately) your shapefile?
I improved rendering performance a lot in gvSIG Community Edition (1.5-based) with Benjamin Ducke, and I could draw your shapefile with that FOSS tool, which would let me measure the zoom in/out effect with Marlin, collect my own statistics and make my own experiments.

Cheers,
Laurent

Olivier Brault

Nov 17, 2017, 3:57:58 AM
to marlin-renderer

Just a small warning.
Quantization simplification could still create holes or overlaps if adjacent edges follow the same path BUT do not use the same number of points to describe it: that is in fact a very bad (and non-topological) description of the geometries, and a very rare case, I think.
It could happen if the geometries had gone through a first collinear simplification pass before the quantization simplification.


Olivier
1_BEFORE.png
2_AFTER.png

Olivier Brault

Nov 17, 2017, 5:14:28 AM
to marlin-renderer
Hi Laurent,

I plan to rewrite the collinear simplifier to be faster, and I will add tuning parameters, as I would like to use the area error instead.
Another warning (based on a bad past experience...): if you plan to compute triangle areas, make sure you use a STABLE formula, with no numeric cancellation.
Sometimes, on very flat triangles or in other odd configurations, an unstable formula can give a very bad result.
See this document, http://www.irisa.fr/sage/jocelyne/cours/precision/insa-1200.pdf, slide 30, for more information.
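
For illustration (this is not taken from the linked slides), the usual cross-product area, written relative to one of the triangle's vertices: subtracting a common vertex first keeps the factors small and limits cancellation from large absolute coordinates, but the final subtraction can still lose precision on nearly degenerate triangles, which is exactly the case the warning is about (a compensated or exact determinant would then be needed):

// Triangle area via the cross product, with coordinates translated to vertex A first
// to reduce cancellation when the absolute coordinate values are large.
// Note: still not fully robust for near-degenerate (very flat) triangles.
static double triangleArea(double xA, double yA, double xB, double yB, double xC, double yC) {
    double abx = xB - xA, aby = yB - yA;
    double acx = xC - xA, acy = yC - yA;
    return 0.5 * Math.abs(abx * acy - aby * acx);
}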


Let me insist: please send me your Marlin internal statistics (doStats=true/false) to measure the gains offered by your D&P simplifier.
 
==> See the attached files: the reports are not the same length in the two cases, because I did not make the same number of drawings in each case (I just used my app, panning and zooming, without counting the number of display refreshes...).

Olivier
Log with simplif 0.5px.txt
Log without simplif.txt