The City Speaks, but to Whom?

An unfamiliar city can be difficult to navigate: each district may have its own style of street signs, unknown landmarks and directions in a different language. And even a city that is easy to navigate on foot can be hard to get around in a car, or vice versa. Now, ever more actors are demanding better navigation in the growing cities of the future, from airborne delivery drones to self-driving cars. The task of future urban designers is to accommodate both the automated and the human inhabitants of cities and make sure neither is left behind.

This is not the first time that these two navigational priorities have clashed. Beyond their ideological differences, the two Cold War adversaries – the United States and the Soviet Union – differed sharply in how they organized information about the physical shape of the world. The Soviets created incredibly detailed paper maps, while the American approach was higher-tech but also more familiar today: NAVSTAR, a system of satellites able to pinpoint an object’s location accurately, better known now as the Global Positioning System, or GPS. The Soviet maps were designed for high-ranking military and government officials, focusing on human-scale details like the depth of ponds, the width of footpaths and the kinds of weather an area could experience at different times of the year. GPS, conversely, was more focused on helping increasingly automated machines like airplanes find their way.

Today, mapping services like Google Maps are at the forefront of the new navigation battles. Google itself became a successful company by taking a machine-centered approach to searching the web, while its rival Yahoo! sought to organize the Internet using human ‘librarians’. Google’s algorithms are far from perfect, but given the size of the modern web they could index pages at a speed Yahoo! couldn’t match, making Google the clear winner in the search engine wars of the 2000s.

Ironically, in the battle between human and algorithm at the center of the city of the future, Google is increasingly turning to the methods of its defeated rival. While the bulk of the work of creating and improving its maps is outsourced to algorithms that harvest satellite images, Google Street View photos and even users’ location data, the results are then error-checked by human operators to make sure the computer correctly identified difficult-to-analyze features like one-way streets, unconventional crosswalks or oddly angled street signs.

Google’s efforts to improve its maps place it at the epicenter of one of the major issues for the cities of the future: machines are not very good at navigating them. To implement many up-and-coming technologies such as driverless cars or drone delivery systems, we need to radically redesign our cities to communicate information not just to people but to computers as well. Some such technologies, such as traffic lights that communicate directly with cars, are already being prototyped. However, this raises a new problem: machines like driverless cars see the world very differently than humans do, which means ever fewer cues to help walking or cycling residents find their way in unfamiliar neighborhoods. The proliferation of GPS has already made conventional navigation harder, with many suburban communities omitting physical street signs entirely and homeowners forgoing visible house numbers. Just as with navigating the internet, we have already outsourced much of the work of finding physical locations to mechanical aids.

This has profound implications for the future of urban design. The first is a need for redundancy: humans can’t determine their exact location from satellites, while drones can’t read street signs. Information therefore needs to be delivered in both a human-readable and a machine-readable medium, with the two kept fully in sync, especially when important properties like street names or directions of travel change. This increases both the difficulty and the expense of urban planning. The distinction between human-readable and machine-readable navigation marks also cuts across socioeconomic boundaries. The difficulty of navigating suburbs without GPS already serves as a class-based filter for the residents of those communities. It is easy to imagine similar urban communities that are difficult or even impossible to access without a self-driving car, creating a new form of urban segregation and perpetuating inequality.

Both of these issues are facets of the same question: what purpose do our cities serve, and who are we really building them for? Are they efficient hubs of commerce or comfortable places to live? To be successful, our cities have to do both: provide a good living environment and serve as centers of economic growth. By making it easier for machines to identify and deal with the myriad urban obstacles that we simply take for granted, we can make our cities cleaner and more efficient. However, it is important that we do not remove the human element from the equation completely, lest we become trapped in machines’ cities instead.

by Yaroslav Mikhaylov

Image Credit:

Cover Image: Nic McPhee, licensed under Creative Commons Attribution-ShareAlike 2.0 Generic

Image 1: JCT600 via their blog, licensed under Creative Commons Attribution-ShareAlike 2.0 Generic

Image 2: smoothgroover22, licensed under Creative Commons Attribution-ShareAlike 2.0 Generic

Image 3: jan buchholtz, licensed under Creative Commons Attribution-NonCommercial-NoDerivs 2.0 Generic