I'm not arguing that Chicago should be at the top of the list, and their 30mph speed limit is a big problem, but does anybody really think that it's a worse place to ride a bike than Jacksonville, Houston, Phoenix, San Antonio, or Los Angeles?
Here are my specific problems with their methodology:
1. They use OpenStreetMap data to determine cities' bike networks. This is likely the best option available to them (at least, without doing a massive amount of work or paying a massive amount of money), but it's still a problem. OSM mapping information for bike routes is often incomplete, incorrect, or - at best - inconsistently tagged. That last one is really the biggest and most pervasive problem; there are multiple different ways to identify bike routes in OSM and best practices have changed over time. That means that different tools will interpret the same data differently - to see what I mean, check out how differently Ann Arbor renders on
cyclosm.org vs
opencyclemap.org vs, say,
ridewithgps.com. (OpenCycleMap in particular is erroneously interpreting some ordinary sidewalks as high-quality bike routes for reasons that are ... complicated and difficult to unwind). I don't know exactly which types of tagged routes the BNA tool is including, but almost any set of choices will have imperfections.
This past winter, I actually spent some time re-tagging a bunch of OSM bike routes in Ann Arbor to try to make them more consistent with the city's own map (
https://a2gov.org/a2Transportation) and my personal experience. I strongly suspect this is actually why our PeopleForBikes score jumped several points between 2023 and 2024...
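To make the tagging-inconsistency point concrete, here's a toy sketch in Python. The tag dictionaries and both "interpreters" are invented for illustration - this is not how the BNA tool actually works - but it shows how two reasonable readings of the same OSM data can disagree about what even counts as bike infrastructure:

```python
# Toy sketch: four OSM ways tagged with different (all real) bike-tagging
# conventions. The specific examples are made up, not actual Ann Arbor data.
ways = [
    {"highway": "cycleway"},                              # dedicated path, oldest convention
    {"highway": "residential", "cycleway": "lane"},       # painted lane on the roadway
    {"highway": "tertiary", "cycleway:right": "track"},   # newer directional tagging
    {"highway": "footway", "bicycle": "yes"},             # sidewalk where bikes are merely legal
]

def strict_interpreter(tags):
    """Counts only dedicated cycleways as bike infrastructure."""
    return tags.get("highway") == "cycleway"

def loose_interpreter(tags):
    """Counts anything a bike may legally use - including plain sidewalks,
    which is roughly the OpenCycleMap failure mode described above."""
    return (
        tags.get("highway") == "cycleway"
        or "cycleway" in tags
        or any(k.startswith("cycleway:") for k in tags)
        or tags.get("bicycle") in ("yes", "designated")
    )

strict = sum(strict_interpreter(w) for w in ways)
loose = sum(loose_interpreter(w) for w in ways)
print(strict, loose)  # prints: 1 4
```

Same four ways, and one tool sees a quarter of the "network" that another sees. Multiply that across every street in a city and the score differences stop being noise.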
2. As far as I can tell, they are relying
entirely on posted speed limits to determine "high stress" vs. "low stress" routes - not actual speeds, or street design, or lane widths, or any of the other good things that they're talking about in interviews. That throws out a ton of context that can make a world of difference as to whether a street is
actually a pleasant and safe place to ride a bike. Admittedly, most of those other factors are ones for which high-quality data (or even like ... medium-quality data) does not consistently exist. That said, one thing I believe they
should incorporate is traffic volume data. Ann Arbor's comprehensive transportation plan considers
both speed limit and traffic volume to determine the appropriate intervention for all-ages-and-abilities bike routes, and I believe this approach is correct. And our region, at least, does have fairly comprehensive traffic-volume estimates for everything that's more major than a neighborhood street:
https://maps.semcog.org/TrafficVolume/.
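To illustrate the speed-plus-volume idea, here's a minimal sketch. The thresholds are placeholders I made up for this example - they are not the actual values from Ann Arbor's plan or any design guidance:

```python
def recommended_facility(speed_limit_mph, daily_traffic):
    """Pick a bike facility from speed limit AND traffic volume.
    Thresholds here are illustrative placeholders only."""
    if speed_limit_mph <= 25 and daily_traffic <= 1500:
        return "shared street / neighborhood greenway"
    if speed_limit_mph <= 25 and daily_traffic <= 6000:
        return "striped or buffered bike lane"
    return "protected bike lane or separated path"

# A speed-limit-only model treats these two 25mph streets identically;
# traffic volume is what actually separates them.
print(recommended_facility(25, 800))   # prints: shared street / neighborhood greenway
print(recommended_facility(25, 9000))  # prints: protected bike lane or separated path
```

The point isn't these particular cutoffs - it's that a quiet 25mph side street and a busy 25mph arterial are completely different riding experiences, and a speed-limit-only model can't tell them apart.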
Again, I do in some ways appreciate what they're trying to do here. I think the bike network analysis tool they've built (and to their credit, it's open-source:
https://github.com/PeopleForBikes/brokenspoke-analyzer) could be genuinely useful for benchmarking local progress and for scenario planning. But I really think it's irresponsible of them to use this approach to generate a totally ordered list of city rankings (with some results that are
clearly divorced from people's actual experiences), and blast them out such that they're used to generate endless clickbait stories in the media...
I actually do think the League's "bicycle friendly community" ratings are a somewhat better approach, though certainly not without flaws of their own.
- Adam