Population Geography

Does High-Rise Housing Contribute to Ultra-Low Fertility Rates?

The Antiplanner blogsite recently ran an interesting and controversial post arguing that South Korea’s extraordinarily low fertility rate is linked to its prevalence of high-rise housing. As the author put it:

South Korea’s high-rise housing and low birthrates are closely related. People don’t have children if they don’t have room for them. High rises are expensive to build so living space is at a premium. Birth rates are declining throughout the developed world, but they have declined the most in countries like South Korea, Russia, and China that have tried to house most of their people in high rises.

The post elicited pushback, with one commenter stating that she saw “not a shred of evidence other than his bald assertion that people in Korea have no room for kids.” Evidence is indeed necessary to support such a claim, but is it available? It is true that some other countries noted for their high-rise housing, most notably Brazil, have also experienced plummeting fertility. But in both Brazil and South Korea, low fertility is also characteristic of rural areas and small towns that are not dominated by high-rise housing, albeit not to the same degree as in large cities covered with apartment towers.

My immediate reaction to this article was to try to devise a geographical test, one that would allow direct comparisons of housing types and fertility rates. Unfortunately, I was not able to find a relevant data source in the time that I allotted myself for the task. The best information that I could find is a list of European countries by the share of people living in detached and semi-detached housing. People not living in such dwellings can generally be assumed to live in apartment (or condominium) blocks, which can be low-rise, mid-rise, or high-rise. Although this makes it a poor test of the Antiplanner’s thesis, it nevertheless seemed worth pursuing.

As can be seen in the paired maps below, the correlation between multifamily housing and fertility levels in Europe is weak. It is true that most countries with extremely low fertility have little detached or semi-detached housing, including Greece, Italy, and Spain. By the same token, some countries that have abundant detached or semi-detached housing have relatively high fertility, such as Ireland. But note the exceptions. North Macedonia, for example, has extremely low fertility but a high percentage of people living in detached or semi-detached housing, whereas Estonia shows the opposite pattern.

Since the Antiplanner claims that high-rise housing generates low fertility primarily because of inadequate room for child rearing, a better measurement would be to compare the Total Fertility Rate (TFR) with average living space per household. I have not, however, been able to find an adequate data set to assess this assertion. A Eurostat graph showing the “average number of rooms per person 2021” (room size unspecified) does not indicate a significant correlation. According to this graph, Malta has the most capacious housing in Europe, with 2.3 rooms per person, yet its TFR, 1.13, is one of the lowest in the world. The same source also indicates that ultra-low-fertility Spain has much more spacious housing (2 rooms per person) than relatively high-fertility Romania (1.1 rooms per person).

Culturally informed views about the amount of room necessary to rear a child vary significantly from country to country. In general, the wealthier the society, the more space is considered necessary. Such calculations also vary with employment conditions. I have been told by several young couples that more room is necessary for child rearing than before COVID, as one bedroom must now be reserved for an office that can be devoted to at-home work through Zoom. That belief could be dismissed, however, as a mere rationalization for not having children.

The most interesting finding from the data on detached and semi-detached housing in Europe concerns the geographical differences between these two categories. As the second set of paired maps shows, a few countries that have relatively little detached housing have an abundance of semi-detached housing, particularly the United Kingdom and the Netherlands.


Is Confucianism Responsible for South Korea’s Demographic Collapse? Or Could It Be Modernity Itself?

In the United States, fertility rates increasingly correlate with religiosity. Those who regularly attend religious services have more children than those who attend irregularly, who, in turn, have more than nonreligious people (see the graph below). Does this generalization hold for South Korea, a predominantly secular country with substantial Christian and Buddhist minorities? (A 2021 Gallup Korea poll found that 60 percent of South Koreans have no religion, with 16 percent following Mahayana Buddhism, 17 percent Protestant Christianity, 6 percent Roman Catholic Christianity, and 1 percent other religions.) Apparently, it does so only to a slight degree. According to a study published in Demographic Research in 2021, the Total Fertility Rate by faith in South Korea in 2015 was as follows: no religion, 1.13; Buddhist, 1.33; Catholic, 1.16; Protestant, 1.28; and “other religion,” 1.20.

A number of scholars, however, have linked South Korea’s ultra-low fertility rate to Confucianism, a largely secular philosophical system with religious undertones. In the Joseon period (1392-1897), Confucianism was the dominant belief system of the Korean elite. Confucian ideas and practices still pervade South Korean society, probably to a greater extent than in any other country. Intriguingly, other countries of Confucian heritage also have low (North Korea, Vietnam) or ultra-low (Japan, China, Taiwan) fertility levels (although that of Vietnam is just under replacement level and is currently holding steady). Japan and China are also, like South Korea, afflicted with high rates of withdrawal from marriage and the workforce by disaffected young people, a phenomenon known in China as the “lying flat” movement (tang ping).

Scholars who have posited a link between Confucianism and ultra-low fertility in South Korea have generally focused on women, highlighting the increasing number who are intentionally forgoing marriage and childbearing. Standard Confucianism is decidedly patriarchal, with wives placed in a subservient position to their husbands. Family solidarity is highly valued, with mothers expected to devote themselves to their children. As a result, pursuing a career is often deemed incompatible with childbearing and rearing. Faced with such a dilemma, increasing numbers of young Korean women are choosing career development over marriage and motherhood.

In an interesting article called “Ultralow Fertility in East Asia: Confucianism and Its Discontents,” Yen-hsin Alice Cheng argues that East Asia has a unique fertility regime characterized by male-skewed sex ratios at birth (due to son preference), low rates of non-marital birth, rising prevalence of bridal pregnancy, and low rates of cohabitation. These attributes, she argues, are “closely linked to a patriarchal structure based on family lineage through sons, strong parental authority, and emphasis on women’s chastity (i.e. sanctions for premarital sex and ‘illegitimate’ births outside of marriage) and the belief that women are obliged to bear sons to continue the patrilineal bloodline” (pp. 98-99). Faced with such expectations, she argues, many young women are simply opting out.

Although the connection between low fertility and Confucian patriarchy has been made by many others, Cheng also links it to Confucian-inspired “credentialism.” Here she focuses on the legacy of the highly prestigious imperial civil service examinations that selected elite bureaucrats based on their exam performances. This heritage has resulted, she argues, in a “low regard for vocational education and craftsmanship in Confucian societies,” with “academic success in the educational system considered a life goal that is of paramount importance …, with parents doing their best to make sure their children advance as far as possible academically” (p. 102). Today, academic success translates into coveted positions in South Korea’s world-class corporations and allows entry into prestigious professions. Such jobs, however, are limited, relegating even some of the most diligent students to non-prestigious jobs that are regarded as humiliating. Faced with such pressures, many young people prefer social withdrawal.

Scholarly attitudes toward Confucianism in the West have oscillated from condemnation to commendation, depending in part on economic and political conditions in East Asia. In the eighteenth century, when Qing China was the world’s most powerful country, Enlightenment philosophers celebrated the rationalism, secularism, and meritocracy of Confucianism, marveling at a society in which elite status was determined more by exam performance than by aristocratic birth and in which the military was subservient to civil society. Some writers even claim that Confucius was the “patron saint of the Enlightenment.” But as China declined in the nineteenth century while the West advanced, attitudes changed. It eventually came to be argued that the inherent conservatism of Confucianism, marked by undue submission to authority and rigidly hierarchical lines of power, prevented innovation, adaptation, and modernization in East Asia. But the mindset shifted again in the second half of the twentieth century as Confucian societies underwent extraordinarily rapid economic growth and modernization. It then came to be argued that Confucianism’s profound respect for education propelled economic development while its emphasis on family cohesion ensured social stability. But now the tables are again turning, with Confucian patriarchy and credentialism blamed for South Korea’s demographic collapse and the concomitant crisis of disaffected young people abandoning social expectations and dropping out.

None of these interpretations is either “correct” or “incorrect,” and all probably contain an element of truth. A belief system as comprehensive as Confucianism has many different aspects and pulls in different directions. It influences social structures but does not determine them, and thus provides partial explanations at best. A significant amount of evidence, moreover, suggests that today’s supposedly Confucian-generated social pathologies are not limited to East Asia. Ultra-low fertility, for example, is found elsewhere, including much of Europe. But here too historically patriarchal social structures seem to play a role, as Europe’s more gender-egalitarian societies now have higher fertility levels than those with traditionally stronger gender roles; compare, for example, the TFR charts of Sweden and Italy posted below.

The most important issue is probably the extent to which the social withdrawal phenomenon is unique to South Korea and other countries of Confucian heritage. Similar although less extreme developments do seem to be occurring in the United States and Europe, as is noted in the Wikipedia article on South Korea’s Sampo (“Giving Up”) Generation. Rates of depression, anxiety, and social isolation among young people in the U.S., moreover, are also surging. Although many explanations have been offered and debated, this phenomenon is complex and pervasive, leading some to suspect that modernity itself is the ultimate culprit. By this interpretation, modern societies are much better at generating goods and technologies than meaning and real social connections, yet meaning and real social connections remain essential for psychological health. Jon Haidt has been arguing for some time that social media, particularly Instagram and TikTok, are responsible for much of the mental-health crisis among American girls; he now argues that the much more gradual psychological decline found among boys began decades earlier with the arrival of computerized gaming, which pulled them out of real-world encounters and into simulated environments. In some regards, South Korea is the most technophilic and modernistic country in the world, and, by this reasoning, it would be expected to be at the leading edge of a modernity-generated social crisis.


“Hell Joseon”: The Paradoxes of South Korean Development

The paradoxes of South Korean development are profound indeed. On the one hand, the country’s rise from crushing poverty to glittering prosperity over the past 60 years has been nothing less than astounding. In 1960, South Korea was one of the poorest countries in the world, with a per capita gross national income of a miserable $120; today it is one of the wealthiest, with a median household income above those of France, the United Kingdom, and Japan. It has triumphed in the cultural sphere as well, with its music, films, and television shows gaining a huge global audience. Yet for all this success, there is a widespread mood of despondency among many South Koreans, signaled, some argue, by their unwillingness to reproduce. The country’s Total Fertility Rate (TFR) has recently plummeted to 0.7 children per woman, by far the lowest rate in the world. If this trend persists, the South Korean nation will soon begin to rapidly contract. Although mass migration could slow the decline, it faces substantial opposition on cultural grounds. It thus seems to many that South Korea faces a singularly bleak future of national decline.
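The pace of contraction implied by a TFR of 0.7 can be illustrated with back-of-the-envelope cohort arithmetic: with replacement-level fertility at roughly 2.1, each generation would be about one-third the size of the one before it. A rough sketch (ignoring migration and changes in mortality):

```python
# Rough cohort projection under a constant TFR of 0.7.
# Assumes replacement-level fertility of ~2.1 and ignores migration and
# mortality shifts, so each generation is TFR / 2.1 times the size of
# the previous one.
REPLACEMENT_TFR = 2.1
tfr = 0.7
ratio = tfr / REPLACEMENT_TFR  # ~0.33: each generation is a third the last

cohort = 100.0  # index the current generation at 100
for generation in range(1, 4):
    cohort *= ratio
    print(f"generation +{generation}: {cohort:.1f}% of today's cohort size")
# After three generations (roughly 90 years), the cohort falls to ~3.7%.
```

This is an illustration of the trend’s implications if it persisted indefinitely, not a forecast; as noted below, fertility rates can and do rebound.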

One can argue, however, that there is nothing particularly paradoxical about South Korea’s situation, given that all other highly developed countries, bar Israel, have below-replacement levels of fertility. But the broader paradox remains: can seemingly successful socio-economic development really be considered successful if it proves to be demographically unsustainable, dependent on continuing migration streams that come from less-developed countries whose own birthrates are declining, and that are increasingly opposed by populist-inclining, anti-immigration electorates?

But as many writers have argued, concerns about the current birth dearth may be no more firmly grounded than the earlier fears about a “population explosion” that would supposedly generate mass starvation across the world by the late twentieth century. Indefinitely extrapolating almost any trend can indicate impending calamity, but few persist long enough to reach that point. South Korea’s fertility rate could certainly rebound. And, as many argue, if one considers the fact that South Korea is one of the world’s most densely settled countries, population reduction should not necessarily be considered a negative outcome. Some would also contend that by foregoing childbearing, South Korea’s young adults are better able to enjoy the fruits of their country’s extraordinary economic ascent. Despite its paucity of children, South Korea can therefore still be regarded as a resounding success. As the noted economist and public intellectual Tyler Cowen has recently quipped, “South Korea in 1960 was as poor as Central Africa. Today, it’s a very nice, pretty wonderful country.”

The problem with such thinking, however, is that a large proportion of young South Koreans strongly disagree, regarding their country as anything but “nice [and] pretty wonderful.” Since 2016, many of them have been denigrating it as “Hell Joseon” (“Joseon” being the name of early modern Korea, a poor, class-bound, and rigidly hierarchical society). They have concluded that they have no worthwhile future to anticipate regardless of how hard they work. According to a Wikipedia article, “by 2019, the phrase [Hell Joseon] had been superseded by a new term, ‘Tal-Jo,’ a portmanteau comprising ‘leave’ and ‘Joseon,’ which might best be translated as ‘Escape Hell.’” To do so, many are simply opting out, giving up on marriage, family, children, and more. Some evidence indicates that this trend is intensifying, propelled by the COVID pandemic and persisting in its uncertain aftermath. According to one interpretation, many discouraged young adults are now abandoning all hope (see the table below).

Those who have “given up,” however, represent a small minority of South Korea’s youth, with many more soldiering on through their country’s grueling educational and career-advancement systems. But the problems that the disaffected young have identified afflict the entire country and partially underlie its fertility collapse. These problems, it is essential to note, are not unique to South Korea. They are also found in Japan, China, and Taiwan, and are thus characteristic of East Asia as a whole. But they are more extreme in South Korea, where they have apparently generated an immediate demographic threat.

Ironically, the same trait that allowed South Korea’s breathtaking rise is now contributing to its pending decline: extraordinarily hard work from childhood until retirement. For a compelling fictionalized account of the grueling nature of South Korea’s educational system, I recommend the “Pied Piper” episode of the acclaimed television show Extraordinary Attorney Woo (season 1, episode 9). According to one poll, a lower percentage of South Korean children reported being “happy at school” than those of any other country. Exhausting schedules are also typical of the workplace. As reported in an insightful Washington Post article:

In this working culture, 14-hour days are the norm. In 2012, a left-leaning presidential candidate ran on the slogan: “A life with evenings.” Most frustrating of all, many young people say, is that their parents, who worked long hours to build the “Korean dream,” think the answer is just to put in more effort.

It is not just the long hours that dishearten young adults, but also the conviction that they will not be able to succeed no matter how hard they work, feeling that the system is rigged against them. Although South Korea purports to be a meritocratic country in which anyone can get ahead by dint of diligence and intelligence, inherited class position, family and school connections, and even place of birth still matter a great deal. But for most parents, the belief in, and the desire for, upward class mobility for their children remains paramount, leading to huge investments in after-school academies and other forms of educational enrichment. The required expenditures are so large that having a second child often becomes financially impossible. This combination of financially stressed and educationally obsessed parents and emotionally stressed and deeply disillusioned children contributes to a yawning generational gap, undermining the cohesion of South Korean society.

As is the case in many other wealthy countries, the high cost of housing is another factor in South Korea’s declining birth rate. Many young couples cannot afford an apartment, let alone a house, large enough to accommodate more than one child. The lack of affordable housing in a country that is beginning to experience population decline might seem surprising, but it has been propelled by several factors, including the continuing concentration of people in a few major cities. Roughly half of the nation now lives in the greater Seoul metropolitan area. The country’s rural population, moreover, continues to shrink, although it is now so small (4.15 percent) that the pace of decline has slackened. Governmental policy, however, is probably more important – and far more perverse. As reported in a 2021 article in Foreign Policy:

The average price of an apartment in Seoul has doubled in the past five years under the current government’s misguided policies on mortgage rules and tax penalties. Four years ago, it would have taken 11 years’ worth of South Korea’s median annual household income to buy an apartment in Seoul. Now, it costs more than 18 years’ worth of income. Rents have shot up, leaving young people with limited savings and without a shelter.

Some observers have linked South Korea’s fertility implosion to its Confucian heritage, which will be the focus of the next GeoCurrents post.


South Korea’s Fertility Collapse

Recent reports indicate that South Korea’s Total Fertility Rate (TFR) has dropped to 0.7 children per woman, a staggeringly low figure. Although below-replacement fertility is now found in all high-income countries except Israel, all others have significantly higher birth rates than South Korea. According to the United Nations Population Fund (2023), no other sovereign state has a TFR below 1.0. Other sizable countries with very low rates, such as Italy, Spain, Ukraine, China, and Japan, report TFRs of 1.2 to 1.3.

The fertility collapse in South Korea is generating a lot of attention, with many observers warning of a pending disaster. Ross Douthat, writing in the New York Times, claims that:

There will be a choice between accepting steep economic decline as the age pyramid rapidly inverts and trying to welcome immigrants on a scale far beyond the numbers that are already destabilizing Western Europe. There will be inevitable abandonment of the elderly, vast ghost towns and ruined high rises and emigration by young people who see no future as custodians of a retirement community. And at some point there will quite possibly be an invasion from North Korea (current fertility rate: 1.8), if its southern neighbor struggles to keep a capable army in the field.

Such warnings may be overblown. The possibility of a demographic-led invasion, moreover, is complicated by North Korea’s own low and declining fertility, which reportedly brought Kim Jong Un to tears earlier this week. It must also be mentioned that not everyone regards demographic collapse as a negative phenomenon. Many environmentalists welcome it, viewing the Earth as grotesquely overpopulated as it is.

The South Korean government, however, is deeply concerned about its birth dearth. It now offers significant subsidies for childbearing, including $10,500 in cash. At least one city has set up its own match-making services. According to a recent NPR story, “South Korea has moved aggressively to stem the decline in births, and its actions provide a model for steps other governments can take to address the issue.” Such framing, however, is little short of bizarre: as South Korea’s demographic initiatives are clearly failing, they can hardly be regarded as a “model.” Other countries, most notably Czechia, have significantly increased their fertility rates and thus provide much better models. But it remains doubtful that South Korea could successfully follow their lead.

The next GeoCurrents post will examine some of the explanations offered for South Korea’s fertility collapse. For today, we will simply look at birth-rate variation across the country, looking for geographical patterns that might help illuminate the issue.

We begin with a simple map of South Korean TFR by province and other first-order administrative divisions. As can be seen, fertility rates are extremely low across the country. The only area with a TFR above 1.0 is Sejong (officially, Sejong Special Self-Governing City). Sejong was established in 2007 as a planned and spacious city intended to take over Seoul’s role as South Korea’s administrative capital. Most governmental ministries have already relocated there. As the Wikipedia article on the city notes, “Sejong uses its new development to market itself as an alternative to Seoul, offering luxury living at a fraction of the cost.” It is not surprising that uncrowded and relatively inexpensive Sejong would have a much higher fertility rate than Seoul – 1.12 as opposed to 0.59 – as the density and costliness of Seoul are often offered as explanations for its extraordinarily low birthrate.

Otherwise, it is difficult to find specific factors that might contribute to South Korea’s province-to-province fertility variations, which are not, in any event, particularly pronounced. Per capita GDP, for example, does not appear to be significant, as can be seen in the paired map posted below.

Province-level mapping, however, offers only a crude and cloudy window into population dynamics. Unfortunately, the only detailed fertility map of South Korea that I have been able to find dates to 2010, when its TFR was 1.2. As can be seen, several parts of the country at the time had fertility rates over 1.8. Comparing this map to one of population density reveals some interesting but not unexpected patterns. To clarify one of them, I have outlined the areas with relatively high fertility (over 1.8) at the time on a dot-map of population density. As can be seen, all these higher-fertility zones were characterized by low or moderately low population density, at least by South Korean standards. Some areas of very low population density, however, also reported extremely low birth rates. Unsurprisingly, major cities in 2010 were also characterized by extremely low fertility. An interesting partial exception, however, was the extraordinarily economically productive city of Ulsan in the southeast. From 2010 to 2015, Ulsan’s TFR rose from 1.37 to 1.49; since then, however, it has plummeted to 0.85 (2022).

Although it is off-topic here, the source of Ulsan’s economic productivity is worth noting: heavy industry. As noted in the Wikipedia article on the city:

Ulsan is the industrial powerhouse of South Korea, forming the heart of the Ulsan Industrial District. It has the world’s largest automobile assembly plant, operated by the Hyundai Motor Company; the world’s largest shipyard, operated by Hyundai Heavy Industries; and the world’s third-largest oil refinery, owned by SK Energy. In 2020, Ulsan had a GDP per capita of $65,352, the highest of any region in South Korea.


Two Additional Maps on Urban Population Change in the United States

In October 2023, GeoCurrents ran several posts on the historical and recent population growth of major American cities. These posts were envisioned at the time as the beginning of a large project on mapping the expansion of urbanism in the United States. That project, however, has been put on hold, perhaps indefinitely. But there are two remaining maps from this endeavor that are worth sharing.

The first is a schematic map that takes the sixteen largest cities in the U.S. in 1950 and shows their relative populations in that year and in 2020. As can be seen, twelve of these cities experienced population loss in this period, several to a significant degree. Detroit, Cleveland, Saint Louis, Pittsburgh, and Buffalo have greatly diminished. Other declining cities, such as Boston, Milwaukee, and Washington, saw much smaller losses.

Only four of 1950’s largest cities gained population over the next 70 years. Two made marginal gains (New York and San Francisco), one expanded significantly (Los Angeles), and one boomed (Houston). Significantly, all four lost population from 2020 to 2022, although in Houston the decline was insignificant (0.07 percent).

The second schematic map turns from city population to metropolitan-area population, which in many ways gives a better sense of U.S. urban dynamics, given the country’s extensive suburbanization. Here we see relative population size, again depicted by the area of each polygon, and population growth from 2010 to 2020, coded by color. As can be seen, all of the top 60 metropolitan areas in the U.S. gained residents during this period, but they did so at very different rates. As would be expected, sunbelt metro areas saw the fastest growth and rustbelt ones the slowest. Only a few metro areas in the northern half of the country experienced major growth in the period, with Seattle, Minneapolis, and Omaha standing out. In the South, the relatively slow growth of New Orleans, Birmingham, Memphis, and Virginia Beach stands out, as does the rapid population expansion of the Austin, Nashville, and Raleigh metro areas.


Small But Densely Populated American Cities & the Transformation of Cudahy, CA

The list of the most densely populated incorporated cities in the United States has some interesting features. The top four entries are all small cities (less than 1.5 square miles; fewer than 70,000 inhabitants) located just to the west of Manhattan in Hudson County, New Jersey. Three of the top 11 – Kaser, New Square, and Kiryas Joel – are relatively new towns in the New York metropolitan area that are entirely or primarily inhabited by Hasidic Jews. All three have high fertility rates and low levels of per capita income. According to Wikipedia, “Kiryas Joel has the highest poverty rate in the nation,” while New Square is “the poorest town (measured by median income) in New York, and the eighth poorest in the United States.”

One surprising revelation in the city-density list is the large number of thickly populated cities that were originally established as low-density suburbs of Los Angeles. Of the 140 U.S. cities with more than 10,000 people per square mile, 28 are in the Los Angeles region. Although still conventionally imagined as a low-density, suburban environment, the L.A. region has been densifying for decades. The sprawling city of Los Angeles itself, covering some 469 square miles, is now moderately dense by U.S. standards. As the density map of southern Los Angeles County posted below shows, central L.A. is now heavily inhabited, with many census tracts reporting more than 30,000 people per square mile. Quite a few outlying tracts also post high figures. Many of these areas do not appear at first glance to be densely populated, as they are dominated by low-rise buildings and include many detached, single-family houses. But the number of persons living in each dwelling unit can be high, particularly in areas with large numbers of recent migrants.
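The arithmetic behind that last point is simple: tract density is the product of dwellings per unit area and persons per dwelling, so the same low-rise housing stock can yield very different densities. A sketch with hypothetical (non-census) figures:

```python
# Illustrative arithmetic only; the figures are hypothetical, not census data.
def density(dwellings_per_sq_mi: float, persons_per_dwelling: float) -> float:
    """People per square mile implied by housing stock and household size."""
    return dwellings_per_sq_mi * persons_per_dwelling

# The same single-family housing stock at two different household sizes:
print(density(6_000, 2.5))  # 15000.0 people per square mile
print(density(6_000, 5.5))  # 33000.0 people per square mile
```

Larger households more than double the density with no visible change in the built environment, which is how low-rise tracts can exceed 30,000 people per square mile.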

Several of the small, densely populated cities in the Los Angeles metropolitan area lie in the northwestern quadrant of a cluster of municipalities known as the “Gateway Cities.” I have enclosed the northern portion of this “Gateway” area on the maps posted above and below, excluding the relatively large city of Long Beach. The crowded little cities in this region are relatively poor and have large immigrant populations. In 2019, Business Insider placed Huntington Park in the lowest position in California on its “misery index” and in the tenth lowest position nationally. The Wikipedia article on Maywood estimates that “one-third of [its] residents live in the U.S. without documentation.” Maywood is also notable for being “the first municipality in California to outsource all of its city services, dismantling its police department, laying off all city employees except for the city manager, city attorney and elected officials, and contracting with outside agencies for the provision of all municipal services.”

The evolution of tiny but densely packed Cudahy, with almost 23,000 residents living in 1.18 square miles, is particularly interesting. Cudahy was originally designed as a semi-rural garden city. Its founder and namesake, the wealthy meat-packing entrepreneur Michael Cudahy, purchased a large ranch in 1908, which he subdivided and sold off in one-acre lots. As explained in the Wikipedia article on the city:

These “Cudahy lots” were notable for their size—in most cases, 50 to 100 feet (15 to 30 m) in width and 600 to 800 feet (183 to 244 m) in depth, at least equivalent to a city block in most American towns. Such parcels, often referred to as “railroad lots,” were intended to allow the new town’s residents to keep a large vegetable garden, a grove of fruit trees (usually citrus), and a chicken coop or horse stable.

Although gardens, orchards, and farm animals are long gone, the old “Cudahy lots” may still be visible in satellite images (see the image below; I was not, however, able to find a map of the original city lots). At any rate, Cudahy gradually morphed into a crowded industrial town, giving it a legacy of environmental contamination. As noted by the Wikipedia article cited above:

On January 14, 2020, Delta Air Lines Flight 89 dumped jet fuel over Cudahy while making an emergency landing at Los Angeles International Airport. Park Avenue Elementary School suffered the brunt of this dumping. This incident sparked outrage because of the city’s previous history of environmental damage, including the construction of the same school on top of an old dump site containing soil contaminated with toxic sludge, and pollution from the Exide battery plant.

As a final note, it is intriguing that the two main clusters of small, high-density cities in the United States are located immediately adjacent to the country’s two largest cities, New York and Los Angeles. Populous though they are, these two cities have markedly different built environments and settlement histories. New York is well known for its high population density, but Los Angeles is more commonly regarded as a low-density city anchoring an even lower-density metropolitan area. That vision is no longer justifiable.

Small But Densely Populated American Cities & the Transformation of Cudahy, CA Read More »

Mapping the Development of the Urban Framework of the United States, 1790-1830

I am currently working on an online historical atlas of the development of the urban framework of the United States. The maps and commentaries that will constitute this atlas will be posted gradually over the next few weeks or months, interspersed with regular GeoCurrents posts. The first of these installments, showing the situation in 1840 and outlining the “Philadelphia problem,” appeared on October 13, 2023. Today’s post examines the development of the network of cities in the United States from 1790 to 1830. The population figures in today’s post, like those of the October 13 post, are derived from a Wikipedia article called “List of Most Populous Cities in the United States by Decade.” In subsequent posts, covering the period after 1840, a more comprehensive data source will be used.

The United States had few cities of any size in 1790. New York City tops the conventional list, with 33,131 inhabitants, and Philadelphia comes in second, with 28,522. But Philadelphia at the time was limited to what is now called Center City. If one includes what were then the separate cities of Southwark and Northern Liberties District, which were annexed in 1854, Philadelphia ranks first, with a population of 44,096, and is mapped accordingly. As can be seen on the map posted below, the country’s main cities – or towns, if one prefers – of the time were all ports, located on the coast or along estuaries. Except for Charleston, South Carolina, all of them were in the greater northeast. The prominence of New England on this map, with more than half of the cities depicted, will not persist into the 1800s as the urban center of gravity shifts south into the Mid-Atlantic states.

The largest cities on the 1790 list significantly expanded from 1790 to 1800, with New York growing from 33,131 to 60,514, Baltimore from 13,503 to 26,514, and Boston from 18,320 to 24,937. Philadelphia, in the larger sense, still vies with New York for top position. Norfolk, Virginia appears on this map, but the year 1800 marks its only inclusion in the top-ten list.

The rapid expansion of the country’s largest cities is a persistent feature of these maps. By 1810, the population of New York City approached 100,000. By this time, New York was clearly the country’s largest city, a position that it will retain and amplify in the following decades. The 1810 map includes the first truly inland city, Albany, New York. Located on the Hudson River, Albany’s appearance reflects the growing importance of trade with the interior. More important is the inclusion of New Orleans on the southern Mississippi, which became part of the United States with the Louisiana Purchase of 1803.

In 1820, Albany drops off the map, replaced by Washington, DC, which had 13,247 inhabitants in that year. But as the nation’s capital experienced relatively slow growth after this period, it falls off the top-ten list in 1830 and does not reappear until 1950. In the early nineteenth century, Washington was derisively called “the city of magnificent distances” due to its small number of residents living in an urban framework designed for a larger population. In 1842, Charles Dickens claimed that “Its streets begin in nothing and lead nowhere.” The fact that the capital of the United States was such a small city reflects the limited extent of the federal government before the Civil War. As its constituent states were arguably more important than the country itself, the common locution at the time was “The United States are…,” rather than “the United States is….”

The major changes on the map of 1830 reflect the opening of the Erie Canal (the dotted blue line on the map) in 1825. The Erie Canal facilitated the emergence of an extensive water-based transportation network, linking the Hudson River to the Great Lakes, and, by extension, to the Ohio and Mississippi rivers. Not surprisingly, Albany reappears on the 1830 map. More important, Cincinnati emerges as the first significant Midwestern city. Cincinnati will remain in the top-ten list until 1910. Today, with a population of 309,317, it ranks in the 64th position, surpassed by a few suburbs of little historical significance. In the early and mid-1800s, however, Cincinnati was a major and rapidly growing city, due in part to its role in butchering and processing hogs for the national market. This industry was so important that the city was deemed “Porkopolis.” As is explained in a 2016 Cincinnati Magazine article:

“Porkopolis” is one of the names by which Cincinnati is known, and its origin is explained in the following manner: About 1825 George W. Jones, president of the United States branch-bank, and known as “Bank Jones,” was very enthusiastic about the fact that 25,000 to 30,000 hogs were being killed in this city every year; and in his letters to the bank’s Liverpool correspondent he never failed to mention the fact, and express his hope of Cincinnati’s future greatness as a provision-market. The correspondent, after receiving a number of these letters, had a unique pair of model hogs made of papier mache, and sent them to George W. Jones as the worthy representative of ‘Porkopolis.’”

… Frances “Fanny” Trollope is infamous for publishing a scathing indictment of Cincinnati in her 1832 book “Domestic Manners of the Americans”. A great deal of her bile is directed at our pigs:

“If I determined upon a walk up Main-street, the chances were five hundred to one against my reaching the shady side without brushing by a snout fresh dipping from the kennel; when we had screwed our courage to the enterprise of mounting a certain noble-looking sugar-loaf hill, that promised pure air and a fine view, we found the brook we had to cross, at its foot, red with the stream from a pig slaughterhouse while our noses, instead of meeting ‘the thyme that loves the green hill’s breast,’ were greeted by odours that I will not describe, and which I heartily hope my readers cannot imagine.”

It is not coincidental that the Procter & Gamble Company is headquartered in Cincinnati. As explained in Encyclopedia Britannica:

The company was formed in 1837 when William Procter, a British candlemaker, and James Gamble, an Irish soapmaker, merged their businesses in Cincinnati. The chief ingredient for both products was animal fat, which was readily available in the hog-butchering centre of Cincinnati. The company supplied soap and candles to the Union Army during the American Civil War and sold even more of these products to the public when the war was over.

Although candles are now usually made of wax, historically they were mostly made from animal fat. In earlier times, only prosperous people could afford wax candles.


Mapping the Population of U.S. Cities in 1840 – and the Philadelphia Problem

I am currently working on a large set of GeoCurrents maps that will depict the current and historical demographic patterns of U.S. cities and metropolitan areas. Several problems, however, have arisen in data selection and visualization. Most troublesome is the gradual amalgamation of separate municipalities into single cities.

Consider, for example, a map showing the locations and populations of the six largest U.S. cities in 1840 (below). It might be surprising that Philadelphia, considering its historical importance, appears as only the fourth largest, surpassed in population by Baltimore and New Orleans. But this depiction is misleading.  As it turns out, 5 of the 37 American cities with more than 10,000 inhabitants in 1840 are now mere neighborhoods of Philadelphia. New York City, as it is currently conceptualized and legally defined, was also larger than it appears on the map.  In 1840, it did not include Brooklyn, which was then the country’s seventh largest city. Boston was larger as well, as it did not then include Charlestown (see the table below).

To address this problem, I have revised the map by amalgamating what were then separate municipalities with the nearby cities that later annexed them. I did so, however, only for cities with more than 10,000 inhabitants; if smaller cities were subjected to the same treatment, the map would have to be revised yet again. But regardless of such difficulties, it can be clearly seen that Philadelphia was the country’s second city in 1840, with a population more than twice that of Baltimore.
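The amalgamation step itself is simple bookkeeping: assign each 1840 municipality to the modern city that eventually absorbed it, then sum. A minimal sketch, in which the mapping of annexed municipalities follows the text above but every population figure is an invented placeholder, not a census value:

```python
# Summing the 1840 populations of municipalities that were later
# annexed into a single modern city. All figures below are invented
# placeholders for illustration only.

from collections import defaultdict

# Hypothetical mapping: 1840 municipality -> modern consolidated city.
annexed_into = {
    "Philadelphia (Center City)": "Philadelphia",
    "Northern Liberties": "Philadelphia",
    "Southwark": "Philadelphia",
    "New York": "New York",
    "Brooklyn": "New York",
    "Boston": "Boston",
    "Charlestown": "Boston",
}

pop_1840 = {  # invented figures
    "Philadelphia (Center City)": 90_000,
    "Northern Liberties": 35_000,
    "Southwark": 28_000,
    "New York": 310_000,
    "Brooklyn": 36_000,
    "Boston": 93_000,
    "Charlestown": 11_000,
}

consolidated = defaultdict(int)
for municipality, city in annexed_into.items():
    consolidated[city] += pop_1840[municipality]

print(dict(consolidated))
# {'Philadelphia': 153000, 'New York': 346000, 'Boston': 104000}
```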

The final map includes all cities (that currently exist as cities) that had more than 10,000 inhabitants in 1840. As can be seen, almost all were linked to transportation networks, serving as ports on seacoasts, estuaries, or rivers. Several are located on the Erie Canal (shown as a dotted blue line), again illustrating the importance of waterways in the pre-railroad era, which was quickly coming to an end. Lowell, in northeastern Massachusetts (mapped in a light shade of red), is an interesting exception, as it emerged as a planned industrial city focused on textiles. Located on the rapids of the Merrimack River, which provided power, Lowell is often regarded as the “cradle of the American industrial revolution.”


Non-Metropolitan Patterns of Population Change in the United States, 2020-2022

Earlier this year Axios published a revealing map of population change in all counties in the United States from 2020 through 2022. This map, unlike the ones that I made and posted earlier this week, allows one to assess population change in non-metropolitan as well as metropolitan areas. As can be easily seen for the United States as a whole, rapid growth was concentrated in three areas: western and central Florida; the suburban and exurban fringes ringing the largest cities of Texas; and a western belt encompassing Utah, Idaho, and western Montana. Other interesting patterns can also be discerned. To clarify them, the rest of this post will examine state- and regional-level map details extracted from this national map.

Let us begin with Appalachia. Several recent articles (for example, this one by Aaron M. Renn) have noted that southern Appalachia is doing much better than northern Appalachia on almost every metric. It is therefore no surprise that most counties in southern Appalachia grew during this period while many if not most in the north shrank (that is, if “north” is defined as all areas north of the northern borders of North Carolina and Tennessee).

Appalachia is often placed in the same cultural and socio-economic category as the Ozark Plateau, located mostly in southern Missouri and northern Arkansas. Both areas are characterized by steep terrain, heavy forests, and a backwoods folk culture that is both widely denigrated and romanticized. In terms of recent population change, the Ozark Plateau clearly groups with southern Appalachia. But as can be seen on the paired maps below, most counties in this region lost population, or remained relatively static, during the 2010 to 2020 period. The only substantial growth then was in its two metropolitan areas, Springfield in southwestern Missouri and Fayetteville-Springdale-Rogers (home of Walmart and several other major corporations) in northwestern Arkansas. When the COVID pandemic hit, however, people began to relocate to the region’s rural counties. I was intrigued by the very rapid growth shown for Wright County. A quick Internet search, however, returned almost nothing other than a single highly misleading post from World Population Review, which claimed that the county’s population dropped during this period. But as the table and graph posted below indicate, this information was improperly extrapolated from a tiny snippet of information from an earlier period. I find it amusing that this reputable website claims that Wright County lost exactly 63 people every single year between 2011 and 2023! Such are the dangers of automated demographic interpretation.
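The “exactly 63 people per year” artifact is what naive linear extrapolation produces: fit a straight line through a couple of real observations and every synthetic year inherits the same constant change. A minimal sketch, with invented county populations (not the actual Wright County figures):

```python
# Sketch (invented numbers): how naive linear extrapolation produces
# the "exactly N people lost per year" artifact described above.
# Suppose an automated pipeline has only two real observations and
# fills in every later year by extending the line between them.

def extrapolate_linear(year0, pop0, year1, pop1, target_years):
    """Extend the straight line through (year0, pop0) and (year1, pop1)."""
    annual_change = (pop1 - pop0) / (year1 - year0)
    return {y: round(pop0 + annual_change * (y - year0)) for y in target_years}

# Two (invented) observations one year apart, differing by 63 people...
series = extrapolate_linear(2010, 18_815, 2011, 18_752, range(2011, 2024))

# ...so every subsequent "estimate" drops by exactly 63, regardless of
# what actually happened in the county.
changes = {y: series[y] - series[y - 1] for y in range(2012, 2024)}
print(changes)  # every value is -63
```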

Recent population growth in the Ozark Plateau is mirrored by the expansion of other lightly populated, scenic parts of the country. Most of Maine, the northern lower peninsula of Michigan, and northern Wisconsin also saw rural population growth in this period. An interesting place to examine this phenomenon is the Dakotas. As the maps posted below show, most counties in far western South Dakota saw major population gains from 2020 to 2022, whereas most of those of western North Dakota saw significant declines. This pattern is easily explained. Western North Dakota experienced massive growth from 2010 to 2020 due to the oil boom in the Bakken Formation. That boom came and went (although it may return), and as a result the region’s population dropped sharply after the 2020 census. Western South Dakota, in contrast, contains the Black Hills, a scenic region with high amenity values. It is therefore no surprise that it saw a boom during the COVID period. It is important to note, however, that many counties in the western Dakotas have so few people that the gain or loss of a small number can make a dramatic difference on this map.
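The closing caveat about low-population counties is worth making concrete: on a percent-change choropleth, a tiny absolute shift in a small county looks as dramatic as a huge one in a metropolis. All numbers below are invented for illustration:

```python
# Sketch (invented numbers): in a county with very few people, a small
# absolute change translates into a large percentage swing.

def pct_change(old, new):
    """Percent change from old to new."""
    return 100 * (new - old) / old

# A gain of 60 residents in a county of 1,200 ...
small_county = pct_change(1_200, 1_260)            # 5.0 percent

# ... maps to the same color class as a gain of 50,000 residents
# in a county of 1,000,000.
large_county = pct_change(1_000_000, 1_050_000)    # 5.0 percent

print(small_county, large_county)  # 5.0 5.0
```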

Differences between states are also apparent on the national COVID-era population-change map. Consider, for example, the neighboring states of Illinois and Indiana. Although Indiana and Illinois are politically very distinct, their non-metropolitan counties are quite similar. But recent population change at the county level differs greatly across the state border. Only five counties in Indiana had more than a one-percent population loss during this period, whereas only three counties in Illinois had more than a one-percent gain. The financial woes of Illinois are probably a significant factor here.

Idaho and western & south-central Montana show a stark difference between the 2010-2020 and the 2020-2022 population-change maps. In the earlier period, quite a few primarily rural counties lost population. In the later one, only tiny Wheatland County, Montana (population 2,069 in 2020) lost more than one percent of its residents. From 2020 to 2022, many counties in this region, both metropolitan and rural, saw population gains of more than five percent.

California makes an interesting contrast with Idaho and Montana. Population loss from 2020 to 2022 was concentrated in the affluent coastal region, with San Francisco County exhibiting a drop of 7 percent, the largest in the country. But quite a few low-population, peripheral counties also experienced big drops, with Lassen declining by more than five percent. Intriguingly, some of these scenic counties with high outdoor-amenity values had experienced demographic booms in the final decades of the twentieth century. But as can be seen in the tables posted with the map below, this growth had essentially come to an end by 2010. Both Tuolumne and Mono counties, adjacent to Yosemite National Park, lost more than one percent of their population between 2020 and 2022. Evidently, state boundaries matter considerably in relocation decisions, and California is no longer a very attractive state.

Striking Patterns of Population Change in U.S. Metropolitan Areas, 2020-2022

The 2020 to 2022 COVID period saw major population changes in the metropolitan areas of the United States, with some experiencing rapid gains and others rapid losses. Wildwood-The Villages, Florida, for example, saw a staggering 11.75 percent population increase, whereas Lake Charles, Louisiana witnessed a sobering decline of 6.01 percent. Mapping these changes reveals some interesting patterns.

The first map, showing population change in major metropolitan areas (defined here as those with more than 1.5 million people in 2022), exhibits clear regional differences. A stark north/south divide is evident in the region east of the Mississippi River. Here, every major metro area in the South saw population gains, some significant. So too did three out of four in the lower Midwest (Columbus, OH, Cincinnati, OH, and Indianapolis, IN), although by smaller margins. By contrast, every major metropolitan area in the Northeast and upper Midwest lost population. In the western two-thirds of the country, population declines were restricted to the Pacific coastal region. Here every major metropolitan area except Seattle saw a decline. Texas, in contrast, is notable for its rapid metropolitan expansion, with Dallas, Houston, Austin, and San Antonio all registering major gains in this period.

Somewhat different patterns are seen on the map of secondary metropolitan areas, defined here as those with populations between 700,000 and 1.5 million in 2022. As can be seen, fewer of these smaller metro areas lost population, indicating a shift from larger to smaller cities. Intriguingly, most of those that did decline are in or near the Mississippi River and the eastern Great Lakes, the main transportation corridor of the central part of the U.S. before the coming of railroads. New Orleans (officially, the New Orleans–Metairie metropolitan statistical area) saw a drop of over 3.5 percent. I was surprised to see that New Orleans is no longer populous enough to qualify for the higher categories on this map, as its population has apparently dropped below one million. A major statistical discrepancy, however, complicates this analysis. According to the Wikipedia table that I used to make this map, New Orleans–Metairie had a population of only 972,913 in 2022, having declined from 1,007,275 in 2020. The Wikipedia article on the New Orleans–Metairie metro area, however, gives it a population of 1,271,845 in 2020. But no matter how one looks at it, New Orleans has hemorrhaged population, with the city itself dropping from 627,525 residents in 1960 to 383,997 in 2020.

The secondary metro areas that saw population growth in this period also exhibit some interesting patterns. Those in the Atlantic Northeast all saw minor population gains, presumably due to people fleeing the region’s larger and more expensive major metro areas. Much more rapid expansion, however, was experienced in the secondary metro areas of the southeast, particularly in Florida and the Carolinas. Secondary metro areas in the interior West also saw substantial growth.

Even more distinct patterns are visible on the map showing the fastest growing and fastest shrinking metro areas of all sizes during this period. (Many official metropolitan areas, it is important to note, are not large; Eagle Pass, TX, for example, has fewer than 60,000 inhabitants.) As can easily be seen, most of the fastest growing metro areas are in the southeastern coastal region, stretching from the Gulf Coast of Alabama through the Atlantic Coast of the Carolinas. Florida really stands out on this map. Several smaller metro areas in the non-coastal West also saw extremely rapid growth. St. George, UT, for example, went from 180,279 to 197,680 inhabitants, a gain of almost 10 percent. After having witnessed the boomtown atmosphere of Bozeman, MT, which does not even qualify for this map with a growth rate of just under 5%, I have a difficult time understanding how the infrastructure of St. George could keep up with such rapid population expansion.

In contrast, three states stand out for the rapid population decline of many of their metropolitan areas: California, Louisiana, and West Virginia (metro area #16 on this map is Weirton–Steubenville, located in both West Virginia and Ohio). Although metropolitan growth from 2020 to 2022 was concentrated in Republican-voting states, Louisiana and West Virginia form clear exceptions.

The final map shows population loss-and-gain patterns in California’s metropolitan areas during the same 2020-2022 period. Here again the pattern is clear: all coastal metro areas, which have equable climates but are very expensive, lost population, whereas most less-expensive metro areas in the Central Valley, a region noted for its scorching summers, gained population, as did the similarly toasty San Bernardino-Riverside metro area in Southern California, the so-called Inland Empire. The college town of Chico, in Butte County in the northern Central Valley (or Sacramento Valley), however, saw a significant population drop.

Tomorrow’s post will examine the geography of population change in this period in rural counties.


Maps and Graphs to Help Explain Italy’s Turn to Rightwing Populism

Rightwing populist parties have gained support over much of Europe over the past decade. Italy, however, is the first western European country to see a rightwing coalition led by a populist party come to power. The success of Giorgia Meloni’s Brothers of Italy is partly explicable on the basis of Italy’s extremely low fertility rate in combination with its highly negative attitudes toward immigration, as can be seen in the map and charts posted below. With few children being born and immigrants generally unwelcome and no longer staying in large numbers, Italy faces an impending financial/demographic crisis. Unless something changes, future retirees will no longer be easily supported. Meloni’s pro-natalist plans, which call for substantial subsidies for child-bearing couples, thus proved attractive to many voters. Widespread antipathy to immigrants also helps explain the appeal of Meloni’s majoritarian identity politics, focused on nationalistic sentiments.

Why the Italian population is so averse to immigrants is an open question. The country’s foreign-born population is not high by western European standards. It is significant, however, that Italy does not have a long history of receiving immigrants; for most of its time as a nation-state, it has been noted instead for sending out emigrants.

Italy’s economic malaise is another important factor in its swing to the right. In the late twentieth century, the Italian economy was in good shape. In the Il Sorpasso phenomenon of 1987, Italy’s GDP overtook that of the United Kingdom, making it the sixth largest economy in the world. Today Italy’s GDP stands at roughly US$2.06 trillion, whereas that of the UK stands at roughly US$3.38 trillion. Italy has experienced pronounced economic decline over the past dozen years, and most of its regions suffer from high unemployment. Considering as well Italy’s chaotic political system, it is perhaps not surprising that its voters have turned against their country’s political establishment. Such dissatisfaction also helps explain the earlier rise of its left-populist Five Star Movement. But Five Star saw a massive decline in support in the 2022 election. Perhaps its suspicions about economic growth were a factor here.


Urbanization, Economic Productivity, and the Industrial Revolution

Levels of urbanization and levels of economic development roughly correlate. As can be seen on the paired maps, countries with very low levels of urbanization tend to have low levels of economic productivity (as measured by per capita GDP in Purchasing Power Parity). Burundi, for example, has the world’s second lowest urbanization rate (13.7 percent in 2020) and the lowest level of per capita GDP ($856 in 2022). Conversely, Singapore is completely urbanized and has the world’s second highest level of per capita GDP ($98,526 in 2022). The linkage is strong enough that urbanization is sometimes used as a proxy for economic development, especially for earlier time periods. Consider, for example, this passage from a recent study published by the Hoover Institution at Stanford University:

We find that a vector of exogenous factors that were binding constraints on food production, transport, and storage within the densely populated nuclei from which nation states later emerged account for 63 percent of the cross-country variance in per capita GDP today. Importantly, this vector accounts for progressively less of the variance in economic development (as measured by urbanization ratios) going back in time. [emphasis added]

This maneuver is understandable, but its validity is questionable. Historical urbanization rates are difficult to determine, and the figures produced are often controversial. Even today, measuring urbanization is often tricky, due mainly to variations in the population-size and population-density thresholds used to define urban standing. More important, the correlation between urbanization and economic development is not particularly strong. Some primarily rural countries have moderately high levels of development, while some primarily urban countries have low levels. One finds such deviations at both the top and bottom of the urbanization spectrum. Germany, for example, is more than twice as economically productive as Argentina, but is significantly less urbanized. Sri Lanka is (or was, in 2020) almost six times more economically productive than The Gambia, but is far less urbanized.
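The weakening effect of off-trend pairs like Germany/Argentina can be shown with a toy calculation. A sketch with entirely invented data, computing Pearson's r for countries lying close to a common urbanization-GDP trend, and then again after adding two outliers:

```python
# Sketch (all data invented): a few strong outliers, such as highly
# productive but lightly urbanized countries and the reverse, can
# noticeably weaken an urbanization-GDP correlation.

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Urbanization (%) vs. per capita GDP (thousands, PPP) for ten
# hypothetical countries lying close to a common trend...
urb = [15, 25, 35, 45, 55, 65, 75, 85, 95, 100]
gdp = [2, 5, 9, 14, 20, 27, 35, 44, 54, 60]
print(round(pearson_r(urb, gdp), 2))    # close to 1

# ...versus the same data with two Germany/Argentina-style outliers:
# lightly urbanized but rich, and heavily urbanized but middling.
urb2 = urb + [77, 92]
gdp2 = gdp + [63, 23]
print(round(pearson_r(urb2, gdp2), 2))  # noticeably lower
```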

Non-urban areas can be very economically productive, especially if they have relatively high population density, good transportation networks, and proximity to larger markets. Britain’s industrial revolution itself began in rural landscapes. Although maps of the industrial revolution usually emphasize coal and iron ore deposits, industrialization was originally dependent on hydropower, which requires abundant precipitation and significant drops in elevation. Areas around the Pennine Chain, the “backbone of England,” were thus selected for the first mechanized mills, despite their lack of urban infrastructure. The first modern factory, a water-powered cotton spinning mill, was built in the village of Cromford in Derbyshire, England in 1771; others quickly followed elsewhere in the Derwent Valley. Factory owners had to build housing for their workers due to the region’s rural nature. Despite its early economic productivity, Cromford never urbanized, and today has fewer than 2,000 residents.

As industrialization proceeded and coal supplanted hydropower, small and mid-sized towns in northern and central England transformed into major cities. Proximity to markets and ports allowed the factories of Lancashire to supplant those of Derbyshire, and by the second half of the nineteenth century the water-powered mills of the Derwent Valley were mostly abandoned. Today the area is a world heritage site, commemorating the industrial revolution. Currently, hydropower is being restored to make the site more economically sustainable. On August 1 of this year, the BBC reported that:

Hydroelectric power is due to return to a textile mill which helped spark the industrial revolution.

Cromford Mill in Derbyshire – built in 1771 by Sir Richard Arkwright – was the world’s first successful water-powered cotton spinning mill.

The Arkwright Society has secured a total of £330,000 from Severn Trent Water and Derbyshire County Council.

Work is due to start in September with the aim of being fully operational by June 2023.

The project will involve reinstating a waterwheel and installing a 20kW hydro-turbine to power the buildings…


Hispanic Vs. Non-Hispanic White Life Expectancy in Texas

Life expectancy generally correlates with income, but other factors also play an important role. In the United States, non-Hispanic white households earn significantly more money than Hispanic households: $74,912 vs. $55,321 in 2020 (median household income). But Hispanics outlive non-Hispanic whites. The Wikipedia article on “Race and Health in the United States” notes that “as of 2020, Hispanics Life Expectancy was 78.8 years, followed by Non Hispanic Whites at 77.6 years and Non Hispanic blacks at 71.8 Years.” The table in the same article, however, puts the figures at 78.6 for non-Hispanic whites and 82.0 for Hispanics. It also lists Hispanics as outliving non-Hispanic whites in every state except New Mexico, where the gap was only one tenth of a year.

In Texas, Hispanics can be expected to outlive non-Hispanic whites by 2.8 years. The gap between the two groups, however, varies widely by county, as can be seen in the map posted here (derived from this data source). The patterns are clear and intriguing. In the most heavily Hispanic – and quite poor – counties of south Texas, non-Hispanic whites have the advantage.  In east Texas counties with proportionally fewer Hispanics, Hispanics have a decided advantage.

Some of the data in this tabulation, however, must be questioned. In Lamar County, for example, Hispanics are listed as having a “100+” life expectancy, as opposed to a non-Hispanic-white figure of only 73.7 years. Lamar, a county of roughly 50,000 residents, is 8.8 percent Hispanic (4,412 persons in 2020). I have a difficult time believing that a population this large could really have a life expectancy of over 100 years. The same table also lists non-Hispanic whites in Starr County in far south Texas as having a life expectancy of over 100 years. Non-Hispanic whites make up only 1.78 percent of this county’s population. Intriguingly, Starr County’s non-Hispanic white population plummeted from 2,449 in 2010 to 1,171 in 2020. These patterns are difficult to explain and deserve further investigation.


Demographic Patterns in Montana (and the Rest of the United States)

This penultimate post on county-level maps of Montana and the rest of the United States examines some basic demographic patterns. We begin with sex ratio, as measured by the number of males per female in the population. The national map shows some clear patterns, but they are not always easy to interpret. Sex ratios are high (more males than females) in the interior West and the northern and western Midwest, and are low (more females than males) across much of the lower South, in most of New England, in most major metropolitan areas, and in many counties with large Native American communities. Some of these patterns can be explained by employment opportunities. It is no surprise, for example, to see male-biased populations in the Bakken oil lands of western North Dakota or in the Permian Basin of west Texas and southeastern New Mexico. If anything, I would have expected higher figures in the latter place. Most outdoor-amenity counties in the West also have high sex ratios.

The map of sex ratios in Montana is especially difficult to interpret. The Blackfeet nation in Glacier County has a very low ratio, but not so the Native American communities of Roosevelt County in northeastern Montana. Gallatin County has a high sex ratio, as might be expected in a booming community with a large number of construction jobs, but equally booming Flathead County has a low one. By the same token, some languishing Great Plains counties have high sex ratios, while others have low ones.


On the national map of the population over the age of 65, high levels are seen in counties with large numbers of retirees (parts of Arizona and much of Florida) and in those with declining populations marked by the out-migration of the young. Low levels of elderly residents are found in counties with high birth rates and low longevity figures, and in those that attract large numbers of workers. Western counties with many farm workers, such as those in California’s San Joaquin Valley, have low proportions of residents over retirement age. In Montana, the richest county (Gallatin) has a relatively low number of elderly residents, as do the state’s poor Native American counties. Why Prairie County in the east would have such a large percentage of elderly residents is a mystery. In 2010, its median age was 53.6. If more than 60 percent of its population was really over 65 years of age in 2017, as the map indicates, there must have been some major changes in the intervening period.


The national map of the population under age 18 is in large part a reflection of birth rates. Here the LDS (Mormon) region of Utah and eastern Idaho stands out, as do many areas with large Hispanic populations. In Montana, counties with Native American reservations have high percentages of residents under 18 years of age. Western counties that attract retirees or young adult job- and amenity-seekers have relatively few children.


The Geography of Health and Longevity in Montana (and the Rest of the U.S.)

Maps of health and longevity show many of the same patterns seen on earlier maps posted in this GeoCurrents sequence. In the United States as a whole, several county-clusters of relatively low life expectancy stand out. The most prominent is in eastern Kentucky, southern West Virginia, and southern Ohio, an area mostly inhabited by Euro-Americans. Several areas demographically dominated by African Americans also post low longevity figures, including the southern Mississippi Valley and a few of the large cities that are visible on this county-level map, such as Baltimore and Saint Louis. Almost all counties with large Native American populations also have relatively low figures. Counties with Hispanic majorities, in contrast, generally have average or high levels of longevity, as is clearly visible in southern Texas. Although life expectancy tends to correlate with income, the correlation collapses in many of these areas. Hidalgo County, Texas is over 91 percent Hispanic and has a per capita income of only $12,130, making it “one of the poorest counties in the United States,” but it ranks in the highest longevity category on this map. In Camp County, in northeast Texas, the average life expectancy of white residents is only 74 years, whereas that of Hispanics is 92 years. In most Texas counties, Hispanics outlive whites. Presumably, diet and activity levels are major factors.

Life expectancy varies significantly across Montana, with some counties falling in the highest category and others in the lowest. Low figures are found in counties with large Native American populations and in the former mining and smelting counties of Silver Bow and Deer Lodge. The relatively wealthy counties of south-central Montana post high longevity figures.

The map of heart-disease deaths in the United States shows some stark geographical patterns. Rates are highest in the south-central part of the country but are also elevated across most of the eastern Midwest. Heart-disease death rates tend to be lower in major metropolitan counties as well as in rural counties across much of the West and western Midwest. Many of the patterns seen on this map are difficult to interpret. Why, for example, would heart-disease death rates be much lower in western North Carolina than in eastern Tennessee? (Perhaps the obesity map, posted below, offers a partial explanation, although it only pushes the question back a step.) And why would some counties that are demographically dominated by Native Americans have very high rates whereas others, especially those in New Mexico and Arizona, have very low rates? In Montana, heart-disease death rates tend to be elevated in Native American counties – and in Silver Bow (Butte). South-central Montana has low rates.

On the U.S. cancer death-rate map of 2014, high rates are clearly evident in areas demographically dominated by poor white people (central Appalachia) and poor Black people (the inland delta of the Mississippi in western Mississippi and eastern Arkansas). Low rates tend to be found in the Rocky Mountains, south Florida, and parts of the northern Great Plains. No clear patterns are evident in Montana. Gallatin and Liberty counties fall in the lowest category found in the state, yet have almost nothing in common. The small population of Liberty County, however, makes comparison difficult.


Finally, the patterns seen on the U.S. obesity map are similar to those seen on the preceding maps. The low rates found in the Rocky Mountains, extending from northern New Mexico to northwest Montana, are notable. The northern Great Plains have higher obesity rates than might be expected based on other health indicators. Some odd juxtapositions are found on this map, with several neighboring counties of similar demographic characteristics posting very different figures. Why, for example, would Pecos County in west Texas have much higher rates than its neighbors? In Montana, it is not surprising that Gallatin County, with its youthful, outdoor-focused population, has the lowest obesity rate.
