Encoded polylines -- what's the new [best] alternative?


René

May 26, 2010, 3:35:16 PM
to Google Maps JavaScript API v3
While migrating some apps over to v3, I discovered that encoded
polylines are gone. That's kind of disappointing, since one of my
Google Maps applications uses them and, well, I like the feature.
Having an 8-core server pre-encode the polyline seems a lot smarter
than having to generate them programmatically in the v3 API -- unless
I'm missing something.

Can anyone suggest a best practice, performance-wise, for displaying
long, complex polylines? I mean, should I use something other than the
Douglas-Peucker polyline encoder for computing them (well, I know I
have to -- but in what format)? And then, on the client side, how
should I generate/display them?

...Rene

Ben Appleton

May 27, 2010, 2:37:59 AM
to google-map...@googlegroups.com
On Thu, May 27, 2010 at 5:35 AM, René <renefo...@gmail.com> wrote:
> While migrating some apps over to v3, I discovered that encoded
> polylines are gone. That's kind of disappointing, since one of my
> Google Maps applications uses them and, well, I like the feature.
> Having an 8-core server pre-encode the polyline seems a lot smarter
> than having to generate them programmatically in the v3 API -- unless
> I'm missing something.

Just to be clear: v3 will compute level-of-detail for any polylines or
polygons that you create. This takes only a fraction of the total
time to render polylines or polygons.

> Can anyone suggest a best practice, performance-wise, for displaying
> long, complex polylines? I mean, should I use something other than the
> Douglas polyline encoder for computing them (well, I know I have to --
> but in what format). And then, on the client-side, how should I
> generate/display them?

I suggest just passing the LatLng coordinates to google.maps.Polyline
or google.maps.Polygon. If the performance turns out to be poor, then
I'd suggest using the Ramer-Douglas-Peucker algorithm to remove
unnecessary vertices on the server before sending the coordinates to
the client.
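For example, a minimal sketch (the map setup and the coordinates are
just for illustration):

    var map = new google.maps.Map(document.getElementById('map'), {
      center: new google.maps.LatLng(40.0, -120.5),
      zoom: 6,
      mapTypeId: google.maps.MapTypeId.TERRAIN
    });

    // Raw coordinates, straight from the server -- no levels needed.
    var path = [
      new google.maps.LatLng(38.5, -120.2),
      new google.maps.LatLng(40.7, -120.95),
      new google.maps.LatLng(43.252, -126.453)
    ];

    new google.maps.Polyline({
      path: path,
      map: map,               // attach directly, or call setMap(map) later
      strokeColor: '#FF0000',
      strokeWeight: 2
    });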

Ben

> ...Rene

SpoilsportMotors

May 27, 2010, 5:47:21 PM
to Google Maps JavaScript API v3
On May 27, 2:37 am, Ben Appleton <apple...@google.com> wrote:

> I suggest just passing the LatLng coordinates to google.maps.Polyline
> or google.maps.Polygon.  If the performance turns out to be poor, then
> I'd suggest using the Ramer-Douglas-Peucker algorithm to remove
> unnecessary vertices on the server before sending the coordinates to
> the client.
>
> Ben

Unless I'm missing something, the purpose - or one of the purposes -
of encoding at v2 was that you would (in effect) run R-D-P on the
polyline at the various zoom levels, and then pass the polyline and
zoom levels to the API. At that point, the API would render only the
points in the polyline needed at the current zoom level. What you're
suggesting is that the server remove unnecessary vertices and then
send the polyline to the client, but then this would have to be
repeated as the user zooms in and out, generating a lot more network
traffic and server overhead than before. Furthermore, the encoded
points and levels were pretty compact compared to sending JSON arrays
of lat/lng. I'm failing to see how this is better - can you help
explain this to me?

Thanks,
Herb.

Ben Appleton

May 27, 2010, 7:21:41 PM
to google-map...@googlegroups.com

The v2 encoding scheme did two things:
1 - Compressed the coordinates to reduce bandwidth.
2 - Optionally, included precomputed levels of detail, which the JS would use to render quickly.

Regarding 1: there are open-source libraries you can use to encode and decode polylines to reduce bandwidth.
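If you'd rather roll your own, here is a minimal decoding sketch,
assuming the standard encoding (coordinate deltas at 1e-5 precision,
zig-zag signed, split into 5-bit chunks offset by 63); it returns
plain [lat, lng] arrays:

    function decodePolyline(encoded) {
      var points = [], index = 0, lat = 0, lng = 0;
      function nextDelta() {
        var result = 0, shift = 0, b;
        do {
          b = encoded.charCodeAt(index++) - 63;  // undo the ASCII offset
          result |= (b & 0x1f) << shift;         // accumulate 5-bit chunks
          shift += 5;
        } while (b >= 0x20);                     // 0x20 = continuation bit
        return (result & 1) ? ~(result >> 1) : (result >> 1);  // zig-zag
      }
      while (index < encoded.length) {
        lat += nextDelta();                      // deltas from previous point
        lng += nextDelta();
        points.push([lat * 1e-5, lng * 1e-5]);
      }
      return points;
    }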

Regarding 2: v3 computes these levels in JS, so you do not need to compute them on your server.

Perhaps I confused the issue by mentioning running RDP on your server. I mentioned it only as a first-pass culling step, to throw out vertices that users will never notice at any zoom level. This effectively removes duplicate and redundant vertices to reduce bandwidth. Don't try it unless you find it necessary.
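If you do try that cull, RDP itself is short. A sketch only -- the
[lat, lng] point format and the degree-based epsilon tolerance are
illustrative, not a drop-in library:

    // Distance from point p to the line through a and b ([lat, lng] pairs).
    function perpDistance(p, a, b) {
      var dx = b[0] - a[0], dy = b[1] - a[1];
      var len = Math.sqrt(dx * dx + dy * dy);
      if (len === 0) {
        return Math.sqrt(Math.pow(p[0] - a[0], 2) +
                         Math.pow(p[1] - a[1], 2));
      }
      return Math.abs(dy * p[0] - dx * p[1] + b[0] * a[1] - b[1] * a[0]) / len;
    }

    function rdp(points, epsilon) {
      if (points.length < 3) return points.slice();
      var first = points[0], last = points[points.length - 1];
      var maxDist = 0, maxIndex = 0;
      for (var i = 1; i < points.length - 1; i++) {
        var d = perpDistance(points[i], first, last);
        if (d > maxDist) { maxDist = d; maxIndex = i; }
      }
      // Every interior point within tolerance: the segment is "straight
      // enough", so keep only the endpoints.
      if (maxDist <= epsilon) return [first, last];
      // Otherwise keep the farthest point and recurse on both halves.
      var left = rdp(points.slice(0, maxIndex + 1), epsilon);
      var right = rdp(points.slice(maxIndex), epsilon);
      return left.slice(0, -1).concat(right);   // drop the shared vertex
    }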


SpoilsportMotors

May 28, 2010, 9:25:20 AM
to Google Maps JavaScript API v3
On May 27, 7:21 pm, Ben Appleton <apple...@google.com> wrote:
> The v2 encoding scheme did 2 things:
> 1 - Compress the coordinates to reduce bandwidth
> 2 - Optionally, include precomputed levels of detail, which the JS would use
> to render quickly.
>
> Regarding 1: there are open-source libraries to encode and decode polylines
> to reduce bandwidth which you can use.
>
> Regarding 2: V3 computes these levels in JS, so you do not need to compute
> them in your server.

Mmm. We'd been using the bejeebers out of both 1 and 2 at v2.

I can either compress the data myself, or enable gzip on JSON (which I
think is fraught with peril of its own) to resolve (1). It's slightly
irksome, but not hideous.

With respect to (2), we'd been using RDP at the server to optimally
encode the polylines / polygons for each zoom level, which had the
nice side-effect of throwing out points that wouldn't be seen, even at
zoom level 20. While I appreciate that V3 will now do that calculation
for me, it would be more efficient if I could pre-compute them *once*
on the server side, resulting in less work every time a client asks
for the polyline or polygon in question. Is this really closed off
forever - Google is never going to allow pre-computed polylines?

Herb.

Ben Appleton

May 28, 2010, 6:32:55 PM
to google-map...@googlegroups.com

Not 'closed off forever', but check the actual performance: do you need this feature?


Chad Killingsworth

May 28, 2010, 6:47:14 PM
to Google Maps JavaScript API v3
> I can either compress the data myself, or enable gzip on JSON (which I
> think is fraught with peril of its own) to resolve (1).

I'm gzipping my JSON. Why would this be a problem?
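A minimal server-side sketch of what that looks like (Node.js and the
payload shape are assumptions for illustration; any stack that sets
the Content-Encoding header works the same way):

    var http = require('http');
    var zlib = require('zlib');

    http.createServer(function (req, res) {
      // Hypothetical payload; in practice this comes from your datastore.
      var body = JSON.stringify({ path: [[38.5, -120.2], [40.7, -120.95]] });
      if (!/\bgzip\b/.test(req.headers['accept-encoding'] || '')) {
        res.writeHead(200, { 'Content-Type': 'application/json' });
        return res.end(body);                 // client can't take gzip
      }
      zlib.gzip(body, function (err, compressed) {
        if (err) { res.writeHead(500); return res.end(); }
        res.writeHead(200, {
          'Content-Type': 'application/json',
          'Content-Encoding': 'gzip'          // browser decompresses for you
        });
        res.end(compressed);
      });
    }).listen(8080);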

Chad Killingsworth

SpoilsportMotors

May 30, 2010, 6:25:44 PM
to Google Maps JavaScript API v3
I'd had browser support issues at one point; I suspect that was
probably IE 6, which we're all mercifully ignoring at this point.

So... I'm off to send raw points - gzipped - to the v3 API and measure
the difference from what we're getting on v2 with encoded polygons.

I'll reply back whenever I can put a couple of hours together to pull
this off.

Thanks for your help, everyone.

Herb.