For decades, we’ve been hearing that congestion is getting worse and worse, and that traffic backups are taking a bigger and bigger bite out of our days (and paychecks) each year.

But what if we’re measuring congestion wrong—and the conventional wisdom is largely hogwash?

A geekalicious new report from CEOs for Cities shows that the nation’s most influential measure of congestion, the Texas Transportation Institute’s Urban Mobility Report (UMR), is riddled with conceptual problems, data limitations, and methodological errors that render its city-to-city congestion rankings almost meaningless.  And the UMR is where reporters and policymakers get most of their information about the nationwide impact of congestion.

Even worse, CEOs for Cities says, the congestion rankings are systematically biased against compact cities with short commute distances.  And on top of that, they likely overstate the impacts of congestion, both on people’s time and on the economy as a whole.

If you’re a transportation geek, the whole report is worth a read.  But here’s the nickel summary (we read, so you don’t have to!!)…

  • CEOs for Cities’ main critique is that the Urban Mobility Report’s most-publicized congestion statistic—the “Travel Time Index,” which represents the ratio of travel time during rush hour vs. off-peak times—is inherently, mathematically biased against cities with short commute distances.  In other words, the UMR’s congestion rankings put compact cities, with jobs close to housing, at an automatic disadvantage to sprawling, dispersed cities.

    Here’s why.  Consider two hypothetical cities, Sprawlville and Compact City.  In Sprawlville, people travel a long way to work—an average of 20 miles door to door.  In free-flowing traffic, the trip would take 20 minutes, but it takes 10 extra minutes during rush hour, for a total commute of 30 minutes.  In Compact City, people don’t have to travel as far:  it’s just 10 miles from home to work on average; the trip takes 10 minutes off-peak, and 10 extra minutes during rush hour, for a total of 20 minutes. 

    In this example, congestion slows commutes by the same amount in both cities: 10 minutes.  Sprawlville residents still wind up with longer total commutes, since they travel longer distances.  Yet the “Travel Time Index” shows that Compact City has a worse rush hour!!  That’s because the Travel Time Index works out to a 2:1 ratio (i.e., 20 minutes vs. 10 minutes) for rush hour vs. off-peak travel in Compact City, but only a 3:2 ratio (i.e., 30 minutes vs. 20 minutes) in Sprawlville.

    In short, Sprawlville is a worse place to commute overall, and its total congestion delay is identical to Compact City’s.  Yet the Urban Mobility Report ranks Compact City as a far worse place to commute.
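
    To see the arithmetic spelled out, here’s a quick sketch in Python using the hypothetical numbers above.  (This is just an illustration of the index’s definition, not code from the UMR or the CEOs for Cities report.)

    ```python
    # Travel Time Index (TTI) = rush-hour travel time / free-flow travel time.
    # Numbers come from the hypothetical example above.

    cities = {
        # name: (free-flow minutes, rush-hour minutes)
        "Sprawlville": (20, 30),   # 20-mile commute
        "Compact City": (10, 20),  # 10-mile commute
    }

    for name, (free_flow, rush_hour) in cities.items():
        delay = rush_hour - free_flow  # extra minutes lost to congestion
        tti = rush_hour / free_flow    # the UMR's headline statistic
        print(f"{name}: delay = {delay} min, TTI = {tti:.2f}")

    # Prints:
    #   Sprawlville: delay = 10 min, TTI = 1.50
    #   Compact City: delay = 10 min, TTI = 2.00
    # Same 10-minute delay, but the index brands Compact City as the more
    # congested city, simply because its free-flow denominator is smaller.
    ```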

    This concern is far more than theoretical:  the report finds ample evidence that the Urban Mobility Report’s rankings really are biased in favor of cities with long commute distances.  Take, for instance, the comparison of Chicago with Charlotte, NC.  Commuters face nearly identical delays in the two cities; Charlotte has longer commute distances, and hence longer total commute times; yet the Travel Time Index ranks Chicago as having the worse rush hour.

    [Chart: Chicago vs. Charlotte commute comparison]

    But wait, there’s more!  The CEOs for Cities report also finds plenty of other flaws in the UMR:

    • Compared with other, more data-rich estimates of congestion, the UMR overstates rush-hour delays by 70 percent.
    • The UMR’s estimates of the fuel impacts of congestion aren’t grounded in solid data, and likely overstate how much fuel is wasted by traffic congestion.
    • The UMR’s models of the relationship between travel volumes and travel speeds are poorly grounded in actual data.
    • The UMR’s assessments of changes in congestion over time, which are mostly based on models that extrapolate from a limited set of data, don’t match up with other data sources, such as the US Census and the National Household Travel Survey.
    • Looking at the data over time, the bulk of the additional rush-hour delay since 1982 has resulted from longer-distance commutes, NOT from worsening congestion per se.  (This nifty infographic helps tell the story, and the numerical sketch just below this list makes the same point.)
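
    To make that last point concrete, here’s the numerical sketch promised above.  Every commute number in it is invented for illustration (none of them come from the UMR, the Census, or the CEOs for Cities report); the point is just to show how growth in rush-hour delay can be split into a longer-trips piece and a slower-speeds piece.

    ```python
    # Decompose growth in rush-hour delay: longer commutes vs. slower speeds.
    # NOTE: all numbers below are invented for illustration; they are NOT
    # data from the UMR, the Census, or the CEOs for Cities report.

    def delay_minutes(miles, free_mph, rush_mph):
        """Extra minutes per trip attributable to congestion."""
        return 60 * miles * (1 / rush_mph - 1 / free_mph)

    # Hypothetical "then" and "now" commutes.
    then = dict(miles=8.0, free_mph=35.0, rush_mph=30.0)
    now = dict(miles=14.0, free_mph=35.0, rush_mph=29.0)

    total_growth = delay_minutes(**now) - delay_minutes(**then)

    # Distance effect: lengthen the trip, but hold speeds at their old values.
    distance_effect = (
        delay_minutes(now["miles"], then["free_mph"], then["rush_mph"])
        - delay_minutes(**then)
    )

    # Congestion effect: whatever slower speeds add on top of that.
    congestion_effect = total_growth - distance_effect

    print(f"delay growth:         {total_growth:.1f} min/trip")
    print(f"  from longer trips:  {distance_effect:.1f} min/trip")
    print(f"  from slower speeds: {congestion_effect:.1f} min/trip")
    # With these made-up numbers, most of the growth (~1.7 of ~2.7 minutes
    # per trip) comes from longer trips, not from speeds degrading.
    ```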

    That’s a lot of strikes against the UMR.  But the good news in all of this is that, as far as I can tell, the researchers who publish the UMR seem committed to transparency, and to ensuring that their measurements change as congestion phenomena are better understood.  They’re already updating their methods to incorporate new, richer data sources.  And they’ve changed their methods in the past to accommodate other reasonable critiques, including those leveled by the Washington State Department of Transportation. 

    So there is ample reason to believe that, if the CEOs for Cities critiques hold water, the Texas Transportation Institute could take them to heart—creating an opportunity for new, genuinely useful measures of urban mobility.  But if for some reason the Urban Mobility Report doesn’t get a needed facelift…well, you can be sure that some people will be watching the results very closely.

    Traffic photo courtesy of Flickr user Oran Viriyincy, distributed under a Creative Commons license. Hat tip to Japhet Koteen for the find.