Sunday, May 3, 2026

How Do Culture, Society, and Civilization Shape Warfare? What Is the Western Way of War?

What is this peculiar way of living that historians call “Western Civilization” or “Eurocentric Culture”? It’s difficult to define, yet after studying enough history, students are able to identify it even when they’re not able to explain it. Whatever it is, it’s no more “west” than it is north, south, or east, and it’s no more “European” than it is North American or Australian; it has its roots in such distinctly non-western places as Babylon and Persia. In fact, it has left its imprint everywhere, even in those places in which it remains a tiny oppressed minority tradition.

Western culture stands in stark contrast to the nativism, xenophobia, ethnocentrism, and chauvinism of other cultures. One distinctive characteristic of Western culture is that it is open to, curious about, diplomatically cooperative with, and eager to see the good features in other cultures.

A general discussion of how to define, describe, and explain Western Civilization is far beyond the scope of the present discussion. Instead, the question is about a narrow subtopic: What could be meant by the phrase “The Western Way of War”?

Exploring this topic, historian Victor Davis Hanson begins by listing three factors which have given Western societies advantage in warfare. The first is that civilian government is separate from, and has authority over, the military:

Constitutional government was conducive to civilian input when it came to war. We see this in ancient Athens, where civilians oversaw a board of generals, and we see it in civilian control of the military in the United States. And at crucial times in Western history, civilian overseers have enriched military planning.

Outside of Western Civilization, there is often no clear distinction between civilian government and military commanders. On a simple level, many of those who are heads of state and heads of government in the non-Western world often wear military uniforms, and military leaders often give orders to the civilian government instead of the other way around.

By contrast, in Western nations, it is often required that anyone who wishes to hold office in the civilian government must first sever any connections he might have with the military. Those who had been the highest leaders in the military — Washington, Grant, and Eisenhower — had to first be humbled and resign their commissions before they could become presidents.

In the Western world, there have been moments in which non-Western ways have asserted themselves: e.g., the eras of Napoleon and Hitler.

The second factor which has yielded an advantage for Western nations in combat is a unique sense of honor. Western soldiers are not honored for the number of men they’ve killed. The differences between the earliest phases of Greek history around 1000 B.C. and the classical phase around 400 B.C. are instructive. The earlier era understood the military man to be a killer, and the more he killed, the better. By the later phase, a Greek soldier did not wear trophies or souvenirs to commemorate each man he’d killed:

Western culture gave birth to a new definition of courage. In Hellenic culture, the prowess of a hero was not recognized by the number of heads on his belt. As Aristotle noted in the Politics, Greek warriors didn’t wear trophies of individual killings. Likewise, Victoria Crosses and Medals of Honor are awarded today for deeds such as staying in rank, protecting the integrity of the line, advancing and retreating on orders, or rescuing a comrade. This reflects a quite different understanding of heroism.

Another factor which Victor Davis Hanson identifies is the Western advantage in technology and economics. Outside of the Western world, societies place little value on individualism, and the individual is not encouraged or given the freedom to pursue her or his goals; individualism is not protected. In Western societies, a balance of valuing both the individual and community allows for freedom in experimentation and invention, leading to discoveries and explorations of the natural sciences, which in turn yield new technologies. At the same time, individualism offers chances for people to find ways to build, improve, and bring these technologies to market.

A third factor underlies our association of Western war with advanced technology. When reason and capitalism are applied to the battlefield, powerful innovations come about. Flints, percussion caps, rifle barrels and Minié balls, to cite just a few examples, were all Western inventions. Related to this, Western armies — going back to Alexander the Great’s army at the Indus — have a better logistics capability. A recent example is that the Americans invading Iraq were better supplied with water than the native Iraqis. This results from the application of capitalism to military affairs — uniting private self-interest and patriotism to provide armies with food, supplies, and munitions in a way that is much more efficient than the state-run command-and-control alternatives.

Yet the economic advantage is also one ingredient in a political pressure on the military in Western nations, a pressure which sometimes hinders rather than strengthens military might. The economic systems of the West tend to keep military spending to a minimum. While the military expenditures of some Western nations might be large in absolute terms (e.g., the United States), they are smaller in relative terms than the military spending of many non-Western nations. Simply put, the voters want to spend enough money on the military to bring about a victory in a just war, and not a penny more.

Western Civilization is generally reluctant to start wars, perhaps sometimes for ethical reasons, but directly and explicitly for economic ones.

Another ingredient which creates a pressure for a short and inexpensive war is the concept of the volunteer army. Presently, at the beginning of the twenty-first century, the Western nations have mostly volunteer armies, while other parts of the world enforce large-scale conscription. In the past, Western nations often had to cajole or give incentives to their citizens to obtain sufficient soldiers. While it would be anachronistic to retroject concepts like “conscription” and “the draft” onto the constituent kingdoms which formed the Holy Roman Empire, it was the case that the emperor could not simply demand that soldiers be given to him. He had to persuade the nobles to turn over their soldiers, and he had to make a plausible case to the aristocrats that it was both in their interests and in the empire’s interests that they release their soldiers into the emperor’s service.

It is not unusual for Western military leaders to find themselves fighting a war on a budget. George Washington’s army was perpetually underfunded, as was the United States Army in the Korean War. In non-Western societies, the military has an absolute priority in the economy, especially in wartime.

This pressure on the military, writes Victor Davis Hanson, is at times an advantage, and at times a disadvantage:

Western armies are impatient. They tend to want to seek out and destroy the enemy quickly and then go home. Of course, this can be both an advantage and a disadvantage, as we see today in Afghanistan, where the enemy is not so eager for decisive battle. And connected to this tradition is dissent. Today the U.S. military is a completely volunteer force, and its members’ behavior on the battlefield largely reflects how they conduct themselves in civil society. One can trace this characteristic of Western armies back to Xenophon’s ten thousand, who marched from Northern Iraq to the Black Sea and behaved essentially as a traveling city-state, voting and arguing in a constitutional manner. And their ability to do that is what saved them, not just their traditional discipline.

The West hasn’t “always been victorious in war.” It has lost a number of wars. On balance, however, the distinctive characteristics of Western warfare have often been advantageous.

In the early years of the twenty-first century, changes in technology, economics, diplomacy, global trade, and society itself are changing these centuries-old dynamics. The West may have to recalibrate to maintain its ability not only to defend itself, but also to defend its concepts and way of life. One of these changes is the rapid transfer of knowledge; a second is the leverage which non-Western nations gain from exporting the raw materials and cheap labor (low tech) which a high-tech civilization needs:

There have been two developments over the last 20 years that have placed the West in a new cycle. They have not marked the end of the Western way of war, but they have brought about a significant change. The first is the rapid electronic dissemination of knowledge — such that someone in the Hindu Kush tonight can download a sophisticated article on how to make an IED. And the second is that non-Western nations now have leverage, given how global economies work today, through large quantities of strategic materials that Western societies need, such as natural gas, oil, uranium, and bauxite. Correspondingly, these materials produce tremendous amounts of unearned capital in non-Western countries — and by “unearned,” I mean that the long process of civilization required to create, for example, a petroleum engineer has not occurred in these countries, yet they find themselves in possession of the monetary fruits of this process. So the West’s enemies now have instant access to knowledge and tremendous capital.

Western societies face additional challenges in warfare in the twenty-first century. Victor Davis Hanson lists five of them:

One of these checks is the Western tendency to limit the ferocity of war through rules and regulations. The Greeks tried to outlaw arrows and catapults. Romans had restrictions on the export of breastplates. In World War II, we had regulations against poison gas. Continuing this tradition today, we are trying to achieve nuclear non-proliferation. Unfortunately, the idea that Western countries can adjudicate how the rest of the world makes war isn’t applicable anymore. As we see clearly in Iran, we are dealing with countries that have the wealth of Western nations (for the reasons just mentioned), but are anything but constitutional democracies. In fact, these nations find the idea of limiting their war-making capabilities laughable. Even more importantly, they know that many in the West sympathize with them — that many Westerners feel guilty about their wealth, prosperity, and leisure, and take psychological comfort in letting tyrants like Ahmadinejad provoke them.

Western democracies pose questions to their militaries: Western nations want to know that they are fighting a “just” war, that they ensure humane treatment of enemy combatants and enemy civilians, and that the weapons they use are not cruel or unusual. Admittedly, Western nations have occasionally — rarely — violated these ethical standards: there have been instances in which POWs were mistreated, tortured, or even executed. It is telling that such instances provoke a loud and universal outcry from civilians, politicians, and military leaders.

By contrast, warfare as conducted by those who reject Western standards is assumed to include torture, inhumane treatment, and careless action against POWs and civilian populations. Ad hoc, large-scale executions are conducted without any legal or ethical qualms. Indeed, those who reject Western standards do not merely tolerate brutality in their militaries, they demand it.

It is, of course, somewhat misleading to write of “the West” as if it were homogenous and indivisible. Victor Davis Hanson continues:

The second check on the Western way of war is the fact that there is no monolithic West. For one thing, Western countries have frequently fought one another. Most people killed in war have been Europeans killing other Europeans, due to religious differences and political rivalries. And consider, in this light, how fractured the West is today. The U.S. and its allies can’t even agree on sanctions against Iran. Everyone knows that once Iran obtains nuclear weapons — in addition to its intention to threaten Israel and to support terrorists — it will begin to aim its rockets at Frankfurt, Munich, and Paris, and to ask for further trade concessions and seek regional hegemony. And in this case, unlike when we deterred Soviet leaders during the Cold War, Westerners will be dealing with theocratic zealots who claim that they do not care about living, making them all the more dangerous. Yet despite all this, to repeat, the Western democracies can’t agree on sanctions or even on a prohibition against selling technology and arms.

The West’s strengths are also its weaknesses: one of those strengths is freedom — freedom to disagree, to exercise one’s own agency, or in the case of a nation, to exercise the nation’s agency, even if it means disagreeing with an ally. Alliances among Western nations are therefore fractious. Alliances among non-Western nations function smoothly, because one nation is superior and the rest of the nations in the alliance are vassal states.

There is a great disparity between the technological savvy required to design and build sophisticated weaponry, and the savvy required to use it. A ragtag group of guerilla fighters can’t design and manufacture RPGs, or even understand the details of their physics and electronics, but such a group can use them efficiently and accurately.

The technological advances made possible by the West’s societal structure are easily appropriated to work against the West. The West’s open and free culture of experimentation can be used by a rigid and controlling culture against the West.

The third check is what I call “parasitism.” It is very difficult to invent and fabricate weapons, but it is very easy to use them. Looking back in history, we have examples of Aztecs killing Conquistadors using steel breastplates and crossbows and of Native Americans using rifles against the U.S. Cavalry. Similarly today, nobody in Hezbollah can manufacture an AK-47 — which is built by Russians and made possible by Western design principles — but its members can make deadly use of them. Nor is there anything in the tradition of Shiite Islam that would allow a Shiite nation to create centrifuges, which require Western physics. Yet centrifuges are hard at work in Iran. And this parasitism has real consequences. When the Israelis went into Lebanon in 2006, they were surprised that young Hezbollah fighters had laptop computers with sophisticated intelligence programs; that Hezbollah intelligence agents were sending out doctored photos, making it seem as if Israel was targeting civilians, to Reuters and the AP; and that Hezbollah had obtained sophisticated anti-tank weapons on the international market using Iranian funds. At that point it didn’t matter that the Israelis had a sophisticated Western culture, and so Israel could not win the war.

When Victor Davis Hanson explains his next point, he refers to Michael Moore, a filmmaker who was famous for his opposition to America’s involvement in the Iraq War, i.e., the version of the Iraq War which started in March 2003; Hanson wrote these words in 2009. The reader will understand the dynamic and apply it to later eras.

The freedoms of thought and speech which the West fosters are freedoms which can be used against it. As the West fights to preserve those freedoms, those within the West will oppose such fighting, exercising their rights to free speech to oppose the protection of free speech.

A fourth check is the ever-present anti-war movement in the West, stemming from the fact that Westerners are free to dissent. And by “ever-present” I mean that long before Michael Moore appeared on the scene, we had Euripides’ Trojan Women and Aristophanes’ Lysistrata. Of course, today’s anti-war movement is much more virulent than in Euripides’ and Aristophanes’ time. This is in part because people like Michael Moore do not feel they are in any real danger from their countries’ enemies. They know that if push comes to shove, the 101st Airborne will ultimately ensure their safety. That is why Moore can say right after 9/11 that Osama Bin Laden should have attacked a red state rather than a blue state. And since Western wars tend to be fought far from home, rather than as a defense against invasions, there is always the possibility that anti-war sentiment will win out and that armies will be called home. Our enemies know this, and often their words and actions are aimed at encouraging and aiding Western anti-war forces.

The last point which Victor Davis Hanson makes regarding the obstacles which the West faces in warfare is this: When a culture that values human life, a culture which sees something sacred and a dignity in every human life and which seeks to honor and respect it, encounters another culture which sees human life as expendable, to be reckoned cold-bloodedly as one of several costs of warfare, then there is an imbalance. One of the strengths of Western culture is that it values human life; this strength can seemingly become a weakness in the face of an opponent who places little value on human life.

Finally and most seriously, I think, there is what I call, for want of a better term, “asymmetry.” Western culture creates citizens who are affluent, leisured, free, and protected. Human nature being what it is, we citizens of the West often want to enjoy our bounty and retreat into private lives — to go home, eat pizza, and watch television. This is nothing new. I would refer you to Petronius’s Satyricon, a banquet scene written around 60 A.D. about affluent Romans who make fun of the soldiers who are up on the Rhine protecting them. This is what Rome had become. And it’s not easy to convince someone who has the good life to fight against someone who doesn’t.

To put this in contemporary terms, what we are asking today is for a young man with a $250,000 education from West Point to climb into an Apache helicopter — after emailing back and forth with his wife and kids about what went on at a PTA meeting back in Bethesda, Maryland — and fly over Anbar province or up to the Hindu Kush and risk being shot down by a young man from a family of 15, none of whom will ever live nearly as well as the poorest citizens of the United States, using a weapon whose design he doesn’t even understand. In a moral sense, the lives of these two young men are of equal value. But in reality, our society values the lives of our young men much more than Afghan societies value the lives of theirs. And it is very difficult to sustain a protracted war with asymmetrical losses under those conditions.

The examples offered reflect the 2009 context; the reader will provide more recent examples to illustrate the principles.

In sum, Victor Davis Hanson has compiled a list of the West’s strengths and weaknesses, advantages and disadvantages, and has traced how their relative impact has changed over the last century. He closes with this sentence:

We who created the Western way of war are very reluctant to resort to it due to post-modern cynicism, while those who didn’t create it are very eager to apply it due to pre-modern zealotry.

The challenges which face the West include these: How does the West re-frame and re-phrase its values in the light of the last century’s developments? Which strategies and tactics honor those values, avoid the decline into barbarism, and are powerful enough to protect those values?

If the West can’t find answers to those questions, the planet could lose a heritage which strives for human equality. The West has been able to articulate what all people in all cultures seek: peace, prosperity, freedom, and justice. If the West falls, the yearnings of all people will be unfulfilled.

Wednesday, December 31, 2025

The Intellectual Foundations of Anti-Fascist Resistance

The history of the 1930s and 1940s tells of the networks of individuals and small groups inside Germany which worked to undermine, hamper, and hinder the Nazi war effort and the Nazi systematic destruction of human life. This resistance movement had measurable effects: thousands of Jewish lives were saved as Jews were hidden or smuggled out of Nazi-controlled territory; Germans deliberately produced munitions to substandard specifications, disrupted railways, and cut telegraph and telephone lines, all to make the military less successful.

What motivated these Germans? Some answers to that question are obvious: they desired to free Germany from Nazi oppression and end the war that was taking German lives; they had the rather obvious moral intuition that murdering millions of people — not only Jews, but Russians, Poles, religious groups like the Jehovah’s Witnesses, communists, libertarians, etc. — was wrong and should be stopped.

The official name for Nazism is ‘National Socialism,’ and opposing it was risky. Many members of the resistance died for their beliefs. Given the risk involved, those who opposed National Socialism spent time and energy exploring and articulating their motives. One example is the writings of the White Rose group at the University of Munich.

The leaflets produced and distributed by the White Rose are written in a reflective and educated tone. They include references to Chinese philosophers, Greek and Roman thinkers like Aristotle and Cicero, the Old Testament, classic German poets like Goethe and Novalis, and, most frequently, allusions to the New Testament and to Jesus. The members of the anti-fascist resistance had a carefully reasoned and documented worldview which caused them to take an uncompromising stance against Adolf Hitler and his National Socialism.

Other members of the resistance also wrote about their motivations. Perhaps most famously, Dietrich Bonhoeffer wrote extensively during the prewar and wartime years.

Examining the statements of those who took steps against the Nazis, both similarities and differences appear. Bonhoeffer was a Lutheran theologian; Claus von Stauffenberg was a devout Roman Catholic; still other resistance members were Calvinist, Protestant, or Evangelical. The members of the White Rose group included all of the above as well as followers of Eastern Orthodoxy.

As journalist Uwe Siemon-Netto writes:

It was by and large a movement of Christians, Protestant as well as Roman Catholic.

He gives details:

As far back as 1933 some 6,000 pastors, one-third of Germany’s entire Protestant clergy, had joined the anti-Nazi Pfarrernotbund (Pastors Emergency League). Interestingly, this number corresponds roughly to the proportion of ministers actively opposed to the totalitarian authorities in East Germany after the war, according to Harald Krille, editor-in-chief of Glaube und Heimat, a weekly Protestant newspaper for central Germany.

It was in 1933 that Hitler seized power, so Siemon-Netto’s data shows that the resistance began immediately. The members of what would become the resistance had been watching and analyzing carefully. They didn’t wait to see what would happen, or give National Socialism a chance to prove itself. They knew and understood the actions that Hitler would take. They were ahead of the curve in perceiving and responding to evil on a large scale.

“Thousands of clergymen critical of Hitler,” Siemon-Netto continues, “were jailed, forbidden to preach, or drafted and often sent to the Russian front, from whence most did not return.”

Yet these are the easy and obvious examples: theologians and clergy. The biggest part of the resistance was composed of people who didn’t get a paycheck from a church, and didn’t officially represent organized religion.

Claus von Stauffenberg was one of many officers who were part of the resistance: officers did much of the work of organizing assassination attempts on Hitler and of sabotaging the war effort. Military officers saw Hitler as carelessly wasting the lives of young German soldiers. Stauffenberg played a central role in a plot to assassinate Hitler, and was executed when the plot failed; he understood the risk he was taking, and considered it worthwhile. There were at least 42 attempts to assassinate Hitler from 1932 to 1944, indicating the level of energy in the resistance movement.

Carl Friedrich Goerdeler studied economics at the University of Tübingen. His career included roles as a bureaucrat in the city government of Solingen, as mayor of Königsberg, and as mayor of Leipzig. When Hitler seized power in 1933, Goerdeler first attempted to influence Hitler; he wrote him memoranda on various policy topics, hoping to change Hitler’s plans. Goerdeler resisted pressure to join the Nazi Party, and by 1936 took an openly hostile view of Hitler and the National Socialist government. Starting in 1938, Goerdeler was involved in a series of plots to overthrow the National Socialist government and assassinate Hitler. Like Claus von Stauffenberg, Goerdeler paid with his life.

Both Goerdeler and von Stauffenberg represent a large segment of the resistance, perhaps the majority: They hadn’t studied theology at the university, weren’t employed by a church, didn’t represent a church, and didn’t receive a paycheck from a church. They came from ordinary, worldly walks of life.

Yet both of them were motivated by deeply spiritual conceptions of life. Goerdeler was a Lutheran and von Stauffenberg was a Roman Catholic. Both had absorbed the faiths of their families in childhood, and both of them retained this faith as adults. These two men probably could not have expressed their faiths in the precise academic language of professors, but they had absorbed and internalized their faiths to the point at which they instinctively recognized evil when they saw it, and to the point at which they knew that they were morally obliged to take action against such evil, even if it would cost them their lives.

About Goerdeler’s conceptual framework, Uwe Siemon-Netto writes:

His sense of order made him an early opponent of National Socialism. In very Lutheran terms, Goerdeler called the anti-Semitic outrages of the Hitler regime an “unbearable offense to civilization and a manifestation of mob rule.” Driven by his own motto, omnia restaurare in Christo, Goerdeler strove to topple Hitler.

In this framework, resistance was not optional; it was a duty to defy the National Socialist government:

A Lutheran ethos motivated men like Goerdeler. Goerdeler’s internalized Lutheranism told him that Germans must rid themselves of this evil.

He didn’t write or talk about spirituality; he lived it:

Goerdeler, who knew nothing of Luther’s theology of the cross, lived a theology of the cross.

From these examples, it is clear that those who resisted the Nazis weren’t reacting out of simple emotion. They were drawing from a deep well of intellectual resources, and from it they received the support they needed to take the ultimate risks and endure the ultimate sacrifices.

Wednesday, October 8, 2025

Competing Liberalisms: From Locke to Mill and Beyond

The word ‘liberalism’ and the history of liberalism are complicated and surprising. Historians are careful to distinguish among the different definitions of ‘liberalism’ for this reason. A political thinker who was a liberal in the 17th century is different from one in the 18th century, who in turn is different from one in the 19th century. It will be important to sort and chart these distinct meanings of the word.

John Locke was a liberal; Hillary Clinton was a liberal. But the two are different and distinct from each other.

Although historians can identify isolated strands of liberal thought in previous centuries, and even millennia ago, modern political liberalism emerged in the late 1600s. This era includes the Glorious Revolution and most of John Locke’s political writings.

The initial version of liberalism, shaped by Locke and others, emphasized that the task of government is to protect every citizen’s life, liberty, and property. The etymologies of ‘liberalism’ and ‘liberty’ make clear the goals and values of liberalism. In one famous passage, Locke writes that “no one can be put out of” an initial free state of nature “and subjected to the political power of another, without his own consent.”

Along with the idea that a government derives its legitimacy from the consent of the governed, other key ideas in this original Classical Liberalism included an emphasis on keeping government in check so that it did not encroach on the rights of the individual, and the articulation of such rights as freedom of speech, freedom of association, freedom of religion, and free markets. A laissez-faire economic approach was understood as protecting the individual’s right to do as she or he pleases with her or his property.

Note that Locke posits that humans have, or had, an initial “state of nature,” which he hypothesizes to be a state of liberty. Other political philosophers, including some who disagree sharply with Locke, agree on this point, that there is or was a “state of nature” which is prior to a social contract and the institution of government. The “state of nature” might be logically prior or temporally prior to the formation of a governed state.

For example, Thomas Hobbes developed a concept of the “state of nature” which was sinister and violent, while Locke’s state of nature was a bit more cheerful.

This initial version of modern political liberalism — note that the late 1600s are reckoned as ‘modern’ — arose and developed in the British Isles, and is often called ‘Classical Liberalism,’ as Maurice Cranston writes:

Traditional English liberalism has rested on a fairly simple concept of liberty — namely, that of freedom from the constraints of the state. In Hobbes’s memorable phrase, “The liberties of subjects depend on the silence of the law.” In general, however, English liberals have always been careful not to press this notion to anarchist extremes. They have regarded the state as a necessary institution, ensuring law and order at home, defense against foreign powers, and security of possessions — the three principles Locke summarized as “life, liberty and property.”

Buried in Locke’s view of the world is an Enlightenment concept of human beings as rational, knowing, deliberative agents. Locke, like nearly every other political philosopher, had a notion of human nature, and his conception of society and government arose from his idea of human nature.

The concept of a human being as a deliberate, rational, knowing agent is foundational for free market economics.

Central to this original version of Classical Liberalism is a sharp distinction between society and government. In chapter 19 of Locke’s Second Treatise (1689), which runs from paragraph 211 to paragraph 243, the word ‘government’ occurs approximately 48 times, the word ‘society’ approximately 43 times, and the words ‘free, freedom, liberty’ approximately 29 times. In the chapter, Locke makes a detailed effort to distinguish between society and government.

Following Locke, Thomas Paine emphasizes this distinction and scolds the writers who fail to note it. In 1776 he explained:

Some writers have so confounded society with government, as to leave little or no distinction between them; whereas they are not only different, but have different origins. Society is produced by our wants, and government by our wickedness; the former promotes our happiness positively by uniting our affections, the latter negatively by restraining our vices.

Paine continues, writing that society “encourages” communication or dealings between individuals or groups, while government “creates distinctions.” Society “is a patron,” but government is “a punisher.”

Classical Liberalism proposes that the task of political philosophy is to develop structures which will confine government to its proper province and prevent it from encroaching upon society’s realm, as Maurice Cranston notes:

English liberals have also maintained that the law can be used to extend the liberties of subjects insofar as the law is made to curb and limit the activities of the executive government. Thus, for example, the English laws of habeas corpus, of bail, and of police entry and arrest all constrain or restrain the executive and, in so doing, increase the freedom of the people. Some instruments of constitutional law have a similar effect.

The Classical Liberalism of John Locke had a profound influence on the founding of the United States, eventually contributing to the abolition of slavery and to universal suffrage for all citizens, including women and formerly enslaved people.

Eventually, a competing form of liberalism arose to contend with the original form of Classical Liberalism. A central author in this new movement was J.S. Mill. In 1859, he advocated a schema which inverted the logic of Classical Liberalism. Instead of limiting and curbing government power so that both the individual and the society would be free to flourish, J.S. Mill wanted to empower government to restrain society. Mill reasoned that society, with its unregulated markets and its political principle of majority rule, was the true danger to the freedom of the individual.

This new form of liberalism, promoted by Mill and others, rested on a subtly but importantly different understanding of freedom.

A “liberalism” that once argued for economic freedom, and for the government to refrain from controlling society, now argued for the exact opposite. Maurice Cranston explains:

The traditional form of English political liberalism naturally went hand in hand with the classical economic doctrine of laissez-faire.

“Toward the end of the nineteenth century, however, certain radical movements and certain English liberal theorists,” continues Maurice Cranston, propounded “a different — as they claimed, broader — concept of freedom, which was, to a large extent, to prove more popular in the twentieth century than traditional English liberalism with its economic gospel of laissez-faire.”

Classical Liberalism and J.S. Mill’s liberalism each birthed a handful of political and economic movements, and these movements would struggle with each other through most of the twentieth century.

The newer form of liberalism was less about protecting freedom and more about providing comfort. Indeed, it was willing to sacrifice freedom.

One of Mill’s famous sayings seems to support liberty:

the only purpose for which power can be rightfully exercised over any member of a civilized community, against his will, is to prevent harm to others.

The slippery slope in Mill’s argumentation is hiding behind the word ‘harm’ in this well-known quotation. ‘Harm’ had long been understood as compromising or undermining the protection of an individual’s life, liberty, and property. But now Mill would interpret ‘harm’ in a broader way. ‘Harm’ would be the failure to intervene into the life of the individual, or into the life of society. Such intervention was, according to Mill, morally obligatory, in order to improve the life of the individual.

Liberalism had gone from protecting the life of the individual to improving the life of the individual. While an improved life might seem like a good thing, troubling questions soon appeared: Who decided what was truly an improvement? Who decided which improvements were necessary? How would this improvement violate the rights of the individual, if she or he didn’t want a particular improvement? And who would pay for the improvements?

Maurice Cranston describes the tensions between the two groups of liberals:

The central aim of this new school was utilitarian — namely, freeing men from misery and ignorance. Its exponents believed that the state must be the instrument by which this end was to be achieved. Hence, English liberal opinion entered the twentieth century in a highly paradoxical condition, urging, on the one hand, a freedom which was understood as freedom from the constraints of the state and, on the other, an enlargement of the state’s power and control in order to liberate the poor from the oppressive burdens of poverty.

Liberalism had split into two movements. The two contradicted each other. A single liberal movement soon became a practical impossibility, and eventually, there were more than two versions of liberal movements, both in England and around the world. A significant number of these movements did not use the word ‘liberal’ but rather wore labels like communist, socialist, progressive, etc.

Before the rise of J.S. Mill’s new version of liberalism, the original Classical Liberalism had a good career, especially in the United States, but also in various nation-states around the world, as Patrick Deneen writes:

A political philosophy conceived some 500 years ago, and put into effect at the birth of the United States nearly 250 years later, was a wager that political society could be grounded on a different footing. It conceived humans as rights-bearing individuals who could fashion and pursue for themselves their own version of the good life. Opportunities for liberty were best afforded by a limited government devoted to “securing rights,” along with a free-market economic system that gave space for individual initiative and ambition. Political legitimacy was grounded on a shared belief in an originating “social contract” to which even newcomers could subscribe, ratified continuously by free and fair elections of responsive representatives. Limited but effective government, rule of law, an independent judiciary, responsive public officials, and free and fair elections were some of the hallmarks of this ascendant order and, by all evidence, wildly successful wager.

The success of Classical Liberalism for a century or two in various locations around the globe is due to its core drive for freedom and liberty. Practical steps toward a “weak” or “limited” government made such freedom and liberty possible.

The vision of freedom was nearly universally appealing, allowing Classical Liberalism to appear in nations of various cultures, various religions, and various languages: a global phenomenon. Patrick Deneen expounds on the attractiveness of the doctrine:

The deepest commitment of liberalism is expressed by the name itself: liberty. Liberalism has proven both attractive and resilient because of this core commitment to the longing for human freedom so deeply embedded in the human soul. Liberalism’s historical rise and global attraction are hardly accidental; it has appealed especially to people subject to arbitrary rule, unjust inequality, and pervasive poverty. No other political philosophy had proven in practice that it could fuel prosperity, provide relative political stability, and foster individual liberty with such regularity and predictability.

As effective as Classical Liberalism had been in the process of moving the world into modernity, it yet had within itself the seeds of its own downfall. The liberty of each individual to propound her or his own worldview included the worldviews which undermine a stable society. Those worldviews were translated into action. As society was sabotaged, civic chaos began; the response to the turmoil was increased governmental powers to manage the social instability.

Classical Liberalism, which began by restraining government power to protect individual liberty, had given birth to a movement which necessitated increased government intervention into the life of the private citizen.

Instead of society working to limit government’s ability to shackle the individual, the new order included increasing the government’s power to shackle the salutary ability of society to shape a communal life conducive to individual development.

Patrick Deneen narrates how liberalism destroyed both itself and society:

Liberalism’s founders tended to take for granted the persistence of social norms, even as they sought to liberate individuals from the constitutive associations and education in self-limitation that sustained these norms. In its earliest moments, the health and continuity of families, schools, and communities were assumed, while their foundations were being philosophically undermined. This undermining led, in turn, to these goods being undermined in reality, as the norm-shaping power of authoritative institutions grew tenuous with liberalism’s advance. In its advanced stage, passive depletion has become active destruction: remnants of associations historically charged with the cultivation of norms are increasingly seen as obstacles to autonomous liberty, and the apparatus of the state is directed toward the task of liberating individuals from such bonds.

What will the twenty-first century bring for liberalism? Will the struggle between Classical Liberalism and J.S. Mill’s liberalism continue? Will one or the other of those two dominate? Will the tension continue to be framed as one between individual liberty and governmental interventionism? Which political leaders, and which policies, correspond to Classical Liberalism? Which correspond to J.S. Mill?

In any event, the only way to understand the political dynamics of today and tomorrow is to study the past.

Friday, August 16, 2024

A New Empire Takes Form: The End of the HRE and the Beginning of the Austrian Empire

The Holy Roman Empire (HRE) is a difficult thing to explain. Its beginning does not have a precise date, as it gradually emerged from the remnants of the Carolingian and Ottonian Empires. As the old joke frames it, it was not Holy because it was primarily a secular organization decorated by a superficial relationship to Christian spirituality; it was not Roman, because it was a Central European alliance which only for a few brief years included Rome within its territorial boundaries; it was not an empire, because although its emperors bore that title, they did not have the absolute dictatorial power of emperors in ancient times, but rather were elected to their office and could act only after cobbling together a consensus from those who had elected them.

One clear fact about the HRE is its ending. A series of diplomatic and military maneuverings terminated it in 1806.

If the nature and structure of the HRE are murky and ambiguous, the system and organization of what followed it is even less clear — and what followed it was the Austrian Empire.

The Holy Roman Emperor Franz II declared the Austrian Empire into being in 1804, as a response to Napoleon having declared himself to be the Emperor of the French Empire. Franz II declared that the throne of the Austrian Empire would be hereditary, in contrast to the HRE’s elected emperorship.

The Austrian Empire didn’t come into full effect until Franz II dissolved the HRE in 1806. Napoleon had demanded that Franz II abdicate. Franz II issued a document, the wording of which seemed to dissolve the empire. Rather than abdicate, Franz II may have hoped to thwart any ambitions which Napoleon may have had to be the Holy Roman Emperor. Perhaps Franz II figured that dissolving the empire was the only way to prevent Napoleon from becoming the next Holy Roman Emperor. Historians continue to investigate the exact intentions of both Franz II and Napoleon.

In any case, the HRE came to an end, and Franz was now the Emperor of Austria. He is therefore known both as Franz II of the HRE and as Franz I of the Austrian Empire. Such is the numbering system of hereditary monarchies.

Like the HRE before it, the Austrian Empire had a misleading name. It was not only Austrian: it included Hungary, Slovakia, and Bohemia, as well as parts of what would later become Italy, Poland, Romania, Yugoslavia, and Germany. It was a multi-ethnic, multi-lingual, and multi-national empire.

The diversity within the Austrian Empire was both a strength and a weakness. It fueled intellectual creativity and gave access to a large range of landscapes and their natural resources. But, again like the HRE, the emperor had to balance the competing interests of the nations within his empire, and in the course of these negotiations was sometimes obliged to compromise his own interests as well.

Perhaps it was inevitable that the Austrian Empire would be characterized by constant change and negotiation, because the creation of the empire was itself an act of compromise, as historian Benjamin Curtis writes:

Several events then provoked the final, flailing expiration of the Holy Roman Empire. In 1804 Napoleon was declared emperor of France, which was a terrible affront to Franz, who could not abide that this Corsican upstart would place himself on a level equal to the prestige of the Habsburgs. Franz was also justifiably worried that Napoleon would either take the German imperial title or dissolve the German empire, which would then leave the Habsburgs actually inferior in rank. Thus in August 1804 Franz proclaimed an Austrian Empire, giving himself an ambitious new title without changing much else about his realms. In fact, in his proclamation, Franz explicitly recognized that he ruled over several states, and promised that he would not change any of their constitutions. The legality of this unilateral proclamation was questionable, but it received solid support from the aristocracy and the rest of his subjects, even in Hungary. Franz’s assumption of a new imperial title was a naked play for dynastic honor, to ensure that he would remain an emperor regardless of what Bonaparte called himself.

The Austrian Empire cannot be separated from the Habsburg dynasty. To study one is to study the other. The Habsburg family created the Austrian Empire, and sat on the throne for its entire existence. The Habsburg dynasty had dominated the HRE prior to the advent of the Austrian Empire, and the Habsburgs would dominate the form of state which finally replaced the Austrian Empire.

The first Habsburg king of the Romans, Rudolf I, was elected in 1273, but other dynasties dominated the HRE until 1438. From 1438 until the HRE ended in 1806, there was only one brief period (1742–1745) during which a Habsburg did not occupy the imperial throne.

As Holy Roman Emperors, the Habsburgs ruled a large and changing collection of kingdoms, duchies, principalities, and semi-independent city-states. The last category were called “free imperial cities” and were self-governing, not subject to any local prince or king, but rather representing themselves directly to the Holy Roman Emperor. These cities were successful examples of democracy.

Of all territories collected together into the HRE, the home territory of the Habsburgs had a special status, different and apart from other parts of the HRE, like Tirol, Hungary, Slovenia, Slovakia, etc.

All the other nations in the HRE had their own monarchs, or freely-elected governments in the case of the imperial cities, but the home territory of the Habsburgs was ruled directly by the dynasty. That territory was Austria. So it was reasonable that, when it was necessary to match Napoleon’s bravado, and when, two years later, it seemed that the HRE was at an end, Austria would form the basis for a new Habsburg empire.

Napoleon’s military aggression shaped European geopolitics from 1800 to 1815, as the French Revolution had shaped them from 1789 to 1799. During those 25 years, when the nations of Europe could agree on little else, they agreed to cooperate in their opposition to Napoleon. Seven coalition wars were fought, the first two defending against the French Revolution, and the next five against Napoleon. In the midst of these wars, the Austrian Empire was born and tested.

Describing early years of the Austrian Empire, historian A.J.P. Taylor refers to Franz by the anglicized form of his name:

In 1804 the lands of the House of Habsburg at last acquired a name: they became the “Austrian Empire.” This threatened to be a death-bed baptism. In 1805 the Habsburg dream of universal monarchy gave a last murmur, and Francis aspired to defend old Europe against Napoleon. Austerlitz shattered the dream, destroyed the relics of the Holy Roman Empire, and left Francis as, at best, a second-class Emperor. Austria emerged at any rate as an independent country and strove for an independent policy. The result was the war of 1809, the attempt to discover a new mainspring of action in leading the liberation of Germany. This war almost destroyed the Austrian Empire. Napoleon appealed for a Hungarian revolt and even sketched plans for a separate Kingdom of Bohemia. What saved Austria was not the strength of her armies nor the loyalty of her peoples, but the jealousy of her Imperial neighbors: Alexander of Russia and Napoleon could not agree on terms of partition and were content with frontier gains — Alexander carried off eastern Galicia, and Napoleon turned the South Slav lands into the French province of Illyria. The events of 1809 set the pattern of Austrian policy for forty years, or even for the century of the Empire’s existence. Austria had become a European necessity. In harsher terms, the Great Powers were agreed that the fragments surviving from the Habsburg bid for universal monarchy were more harmless in Habsburg hands than in those of some new aspirant to world empire. The nature of the Austrian Empire was clearly shown in the contrast between Austria and Prussia. Both were restored to the ranks of the Great Powers on the defeat of Napoleon; but Prussia carried herself there by harsh reforms, Austria by pliant diplomacy and ingenious treaties.

Founded in 1804, and entering into full effect in 1806, the Austrian Empire faced its first big test in the War of the Third Coalition. The coalition attempted to defend itself against Napoleon’s attacks, for the most part unsuccessfully. The decisive battle was at Austerlitz in December 1805. Austerlitz is located near the point at which the borders of the Czech Republic, Slovakia, and Austria meet. (Austerlitz is also called Slavkov u Brna; the Czech Republic is roughly the same territory as Bohemia.)

The coalition loss at Austerlitz led to a treaty signed in Pressburg. (Pressburg is also known as Bratislava.) The Treaty of Pressburg, in which the Austrian Empire surrendered several territories, was a loss and a humiliation; indeed, the nineteenth century can be viewed as the long, tragic decline of the Austrian Empire.

When did the Austrian Empire end? Some historians suggest that it ended in 1867, with the Austro-Hungarian Compromise. After that year, Hungary was seen as its own independent nation-state, while the remainder of the territories of the Austrian Empire continued under the same name. In an arrangement known as a ‘personal union,’ the same monarch ruled both.

Other historians argue that the Compromise of 1867 did not end the Austrian Empire, but rather merely redesigned it. After this year, Austria-Hungary had the same territory, the same monarch, and many of the same governmental agencies as it did before that year. The Hungarian desire for independence drove the compromise and was partially satisfied by it, but the Habsburgs continued as before to rule a collection of diverse nations, ethnicities, cultures, and languages.

At the latest, then, the Austrian Empire ended on November 11, 1918, when the last Austrian Emperor, Karl, renounced participation in the affairs of state, a renunciation commonly described as an abdication.

Thursday, June 20, 2024

Varieties of Shame, Varieties of Guilt, and Why They Matter

Psychologists, counselors, and other mental health professionals recognize the significance of shame and guilt. These are powerful and sometimes destructive. But not all guilt is the same; not all shame is the same. Distinguishing between different types of guilt and different types of shame allows the reader to analyze any particular concrete situation more accurately.

One type of shame is the perception — real or imagined — that the people in one’s environment are aware of one’s flaws or sins. A man is ashamed if others learn that he stole money from an orphanage, or embezzled from his employer, or has a secret addiction. Such shame is often occasioned by the sudden revelation of one’s secrets. This type of shame can be called ‘outer-directed.’

A different version of shame is “inner-directed.” This is a self-valuation based upon one’s flaws or sins. One might review one’s actions, and label one’s self as worthless or evil. Such an internalized shame might arise when the distinction between one’s actions and one’s self is not made clear. Although one might have committed horrible crimes, it is those crimes which are evil, not one’s self. Ultimately, actions, not people, are judged to be good or evil.

There are perhaps other varieties of shame, in addition to these two.

Likewise, there are different sorts of guilt.

One kind of guilt is the reality of an individual’s having committed a sin or crime. This is a physical objective truth. A man either did, or did not, rob a bank. It is not a matter of “feeling” that the man is guilty. It is a verifiable fact. This can be called ‘ontological guilt.’

By contrast, another type of guilt is an emotion: this can be called ‘phenomenological guilt.’ One perceives one’s self to be guilty. This perception can be accurate, or it can be false.

Shame and guilt are painful. Outer-directed shame is the experience of being publicly humiliated. Inner-directed shame is the experience of self-loathing. Ontological guilt is the fact that one has done evil. Phenomenological guilt is to feel — accurately or mistakenly — that one has done evil.

These concepts are seen in specific situations: in Dostoevsky’s Crime and Punishment, the main character, Raskolnikov, repeatedly tries to persuade himself that he’s done nothing wrong, and therefore need not feel guilt. Yet a phenomenological guilt keeps reappearing in his mind, and as his attempts to repress it become more frantic, the sublimated guilt is the hidden cause of his bizarre thoughts, words, and deeds. Finally, persuaded by his friend Sofya, he confesses the crime which he has committed. One interpretation of Dostoevsky’s plot is this: to get rid of the phenomenological guilt — i.e., to get rid of the feelings of guilt — one must get rid of the actual guilt — i.e., one must get rid of the ontological guilt. Confession is the first step in a process which will eventually eliminate the ontological guilt. After his confession, Raskolnikov embraces the sentence given to him. Finally, throughout Lent, he wrestles mentally with accepting the facts that he committed the crime and is justly imprisoned. At Easter Time, he and Sofya communicate their love for each other, and he experiences a sense of renewal. Later, he meditates on New Testament texts; Sofya had given him a Bible. He experiences an internal resurrection and regeneration.

Raskolnikov first had to recognize that his phenomenological guilt was indeed accurate. Then he confessed his crime and accepted his sentence; this was not a simple act, but required a great deal of internal wrestling. Finally, he had to receive forgiveness and acceptance, both from human beings, and from God. His ontological guilt thus removed, his phenomenological guilt also disappeared.

A different example shows how shame is calculated into people’s decisions and actions.

In the works of Homer and especially of Virgil, characters commit acts which strike the reader as amoral and as fueled alternately by rage and by fear. Individuals like Aeneas seem almost sociopathic or psychopathic. These characters act in ways calculated either to avoid or to eliminate a sense of outer-directed shame. They have a hierarchy of values in which avoiding outer-directed shame is at or near the top.

These Virgilian characters are seeking honor, which is an inverse of shame. An outer-directed sense of honor is to be praised by one’s fellows. Entire cultures are sometimes labeled as ‘honor-shame’ cultures, because this polarity drives their ethical thinking as well as their personal ambitions.

Correspondingly, an inner-directed sense of honor would be a healthy form of self-regard, and the antidote to an inner-directed sense of shame. To acknowledge the reality of one’s misdeeds, to acknowledge the evil of those deeds, to accept the consequences of those deeds, and yet not loathe one’s self, is a salutary inner-directed sense of honor.

To complete the symmetry, the opposite of guilt is innocence: the two types of guilt, ontological and phenomenological, would be balanced by the same two types of innocence. To be ontologically innocent would be to have in fact not committed the sin or sins in question. To be phenomenologically innocent would be to believe or to feel that one is innocent, quite apart from the actual fact of one’s guilt or innocence.

Perhaps a final concept can be added to the two types of guilt and the two types of innocence. All four of these are in some way in need of, and are relieved by, forgiveness.

Forgiveness from a human being, or a group of them, eliminates outer-directed shame. Forgiveness toward one’s self eliminates the inner-directed shame. Forgiveness from God eliminates ontological guilt, and embracing that forgiveness eliminates phenomenological guilt.

It is a lack of forgiveness which leaves the stories of Homer and Virgil with unsatisfying endings.

Dostoevsky’s plot comes to a happier conclusion. Raskolnikov must endure agonizing spiritual torment for several hundred pages before finally receiving forgiveness in the last two or three pages of Crime and Punishment.

Shame, guilt, and forgiveness form a powerful interpretive structure which can be applied to events real or fictitious.

Sunday, January 7, 2024

Reconstructing the Architecture of an Arabian Cathedral

Sometime around the year 567, an architectural exemplar was constructed in the region now known as Yemen, at the southwestern end of the Arabian peninsula. The governor Abraha caused it to be built. He claimed that “neither Arabians nor Persians have built anything equal to it.”

Abraha’s boast might have some merit: he’d corresponded with Emperor Justinian about the construction of this cathedral, and Justinian had contributed materials and skilled craftsmen to the project. Justinian wouldn’t have allowed himself to be associated with any building project that wasn’t spectacular.

This cathedral no longer exists, but historians Barbara Finster and Jürgen Schmidt have created a painstaking reconstruction of the church. Along with Justinian’s donation,

In addition, Abraha used local materials: gurub stones for the base zone, stones from Mount Nuqum, marble of various colors, and gilded ornamental tiles from the “Palace of Bilqis” in Marib.

Another historian, Werner Daum, estimates a slightly earlier date for the construction of the church, around AD 540. In any case, the cathedral lasted only until the mid 700s, when it was destroyed by invading Islamic armies. According to Daum, documents indicate that the church was still standing in 750/751, but was gone by 753/754, and so the demolition of the church is dated to around 752.

During the roughly 200 years that the church stood, it was a marvel, as Finster and Schmidt write:

The crowning of the outer wall was formed by a likewise projecting attic zone, consisting of four decorative bands, each(?) two cubits high, of colorful, polished stone.

Finster and Schmidt refer to the fact that the joist ends overhung, as did the meander at the top of the structure, an engineering feat.

The church built by Abraha in the city of San‘a’ was not only an architectural wonder, but also a reminder of a time when the Arabian peninsula was inhabited by a large number of Christians, who lived peacefully with animists, Jews, and Zoroastrians. The landscape of Arabia was home to churches, cathedrals, and seminaries.

Starting in the mid 600s, Muslim armies conquered the peninsula, destroying the worship and educational places of the native populations. Numerous brilliant architectural examples were lost in this mass demolition.

Because of the linguistic ambiguities in transliteration, Abraha is also cited as Abreha, Abrahah, or Abrahah al-Ashram. San‘a’ is the city in which this edifice was located, a city also spelled Sanaa, San’a, or Sana.

Thursday, August 24, 2023

The One That Started It All: Shevoroshkin and Nostratics

Can the family tree of the world’s languages be traced back to a single ancestor? It’s no surprise that Dutch, Danish, and German are part of a single language group and share a common ancestor language. Romanian, Portuguese, and French descend from Latin. This understanding of language genealogy is common.

The work of historical linguistics over the last several centuries has yielded several detailed family trees, each of which shows the relationships between some of the world’s currently-existing languages. Dutch, Danish, and German arise from one earlier language, and together with Norwegian, Flemish, Icelandic, and others, form a sibling group: the parent language which gave birth to all of them no longer exists, but can be reconstructed.

The patterns found in the contemporary languages give clues about the structures of the parent language. Working backward, linguists can discover the features of the now-extinct parent. In the case of German, Swedish, Dutch, etc., the original source language is called “Germanic” (not German!) or sometimes “proto-Germanic.”

Likewise, a proto-Italic language gave birth to Latin; Latin in turn produced Spanish, Italian, etc. A proto-Slavic language yielded Polish, Russian, Czech, Slovak, etc.

Moving a generation further into the past, linguists then sought a common ancestor for the Germanic, Slavic, and Italic families. Those three — together with a few others, like Celtic, Hellenic, and Indo-Iranian — are all the progeny of an original “Indo-European” language. This language would have been spoken up to around 2500 B.C., and perhaps even written, although no written evidence has been discovered.

As research moves back generation by generation, the postulation of these various ancestor languages is based on ever more indirect evidence. It is tenuous to reconstruct a language for which no written traces exist, and which has not been spoken or used for centuries or even millennia. Data is derived from the common features of the existing daughter languages.

It is even more shaky to reconstruct a mother language when the evidence is a collection of hypothetical languages which are themselves reconstructions.

The move from examining the vocabulary and grammar of Gothic and Old Icelandic to reconstructing a proto-Germanic language is at least justified by concrete examples from the daughter languages. But the move to reconstruct Proto-Indo-European is even more fragile, because the daughter languages themselves are hypothetical reconstructions.

Yet linguists have done precisely this.

So there is now a detailed account of the proto-Indo-European language, its daughter languages, granddaughter languages, and great-granddaughter languages. As the name ‘Indo-European’ suggests, the family includes many of the languages which are now spoken in a broad area from India to Europe. Sanskrit and English turn out to be cousins.

This same multi-generational family tree structure has been documented for other language groups — an Afro-Asiatic group, a Dravidian group, a Kartvelian group, and others. Eventually, nearly every existing language is seen as part of a larger linguistic family.

The final step — a speculative and controversial step — is to show all these groups, in their earliest phases, as siblings, and to posit a single language as the ancestor of all known human speech. The ultimate source is called ‘proto-World’ or ‘proto-Human’ or ‘proto-Sapiens’ and corresponds to an intuitive understanding of human language. It is conjectured to have been common during the Paleolithic era.

The ‘proto-World’ hypothesis engenders both love and hate among linguists — those who do research in historical linguistics are also called ‘philologists’ — because it rests on only the most uncertain indirect evidence, and yet seems probably correct. The response of many linguists is that it is likely to be true and likely never to be proven.

One step in the process of developing the proto-World hypothesis was the discovery — or invention, depending on how you view it — of the Nostratic language group. This super-family of languages includes the Indo-European group, as well as the Uralic and Altaic groups, and potentially other groups. The Nostratic macrofamily does not, however, include all languages. It is an intermediate stage of development between proto-World and Indo-European.

Among linguists, there are two extreme views: on the one hand, a confident assertion that proto-World and Nostratic had a concrete existence, were part of a clearly definable family tree of languages, and are properly the object of rigorous academic study and attempted reconstruction; on the other hand, a rejection of proto-World and Nostratic as purely speculative and conjectural, lacking any observable evidence, and not properly the object of academic investigation.

Given those two extreme positions, fiery and entertaining debate is guaranteed. There are, of course, many moderate scholars who see proto-World and Nostratic as reasonable hypotheses, but as lacking enough detailed evidence to fuel rigorous reconstruction.

One linguist in the Nostratic camp is Vitaly Shevoroshkin; author Robert Wright gives a glimpse of Shevoroshkin’s work:

The assertion that proto-Nostratic actually existed, though sufficient to inflame a number of American linguists, is innocuous compared with the second part of Shevoroshkin’s world view: the Nostratic phylum is itself historically related to the handful of other great language families Shevoroshkin sees in the world, which means that all of them are descended from a common tongue. This language — called, variously, proto-Human, proto-World, and the Mother Tongue — would have been spoken 50,000, 100,000, maybe even 150,000 years ago, probably in Africa, and then diffused across the globe. And here’s the kicker, the thing that gives Shevoroshkin a rock-solid basis for his bunker mentality: he believes not only that proto-World’s past existence is apparent but that proto-World itself is apparent, its primordial elements distinctly visible in modern languages, as refracted through eons of linguistic evolution. He says we can now begin reconstructing proto-World, the basic vocabulary from which all the world’s known languages have sprung.

While detailed descriptions of how, e.g., Old High German emerged from proto-Germanic have been thoroughly explored, and accounts of them ossified in a couple centuries’ worth of academic work, the details of Nostratic and proto-World are murky and research is ongoing. This will provide employment for scholars, and heated debate among them, for another century or two.

Tuesday, March 7, 2023

Picking up the Pieces: What Can Be Learned from the French Revolution

Prior to 1789, the world had certainly seen its share of brutal and excessively bloody wars, in which many people had died. But the French Revolution was something different: it wasn’t a war; it was the mass execution of unarmed and defenseless civilians.

The French Revolution was also a tragedy. It began promising freedom of speech and ended by murdering people who had simply dared to express an opinion. It began promising freedom of religion and ended by killing priests and brutally enforcing atheism upon the populace.

What went wrong?

The revolutionaries correctly identified a political problem — the absolutist Bourbon dynasty — but they incorrectly sought to fix that problem by changing, not the government, but rather the society.

One fixes governmental problems by changing the government. One fixes societal problems by changing society. The French Revolution attempted to fix governmental problems by changing society, and so was doomed to failure.

Having toppled the monarchy, the revolutionaries discovered that they had no thought-out plan to replace it. What followed was a series of ad hoc governments, hastily structured, quickly manifesting their weaknesses, and as hastily replaced by other equally impromptu governments.

The ideology of the revolution — ideologies lack nuance and complexity — tended toward an absolutism of thought and often of action. The demand for absolute freedom led predictably to chaos. The chaos led predictably to a demand for order, which led to tyranny. The absolute nature of the revolutionary ideology — critics of the French Revolution called it “metaphysical” — also led to oversimplification. Blanket edicts replaced subtle legislation.

A century earlier, John Locke had already pointed to the distinction between government and society. A decade earlier, Thomas Paine had pointed to the same distinction with fresh wording for a new circumstance. To its own detriment, the French Revolution failed to heed this insight.

While both society and legislation can be frustratingly complex, the Byzantine intricacies of governments and cultures are the way they are for a reason. Patient examination will reveal wisdom in the labyrinthine structures. Both natural structures — oysters or stalactites — and human structures — great architecture — require time to be built, and in this time, acquire a multitude of details and variations.

Impatient revolutionaries wish to sweep aside the complexities at once, and introduce an absolute simplicity. Impracticality is the least of the vices of such thinking: it soon turns deadly.

The Jacobins were an influential faction among the revolutionaries in France. The Benthamites in Britain were followers of Jeremy Bentham, originally gathered around his system of philosophical utilitarian ethics, but then broadening those principles and hoping to apply them to society and government. Both groups, while differing from each other, made the same mistakes.

Edmund Burke made insightful critiques of the French Revolution as it unfolded and even predicted its various developments before they happened. Writing about Burke, historian Russell Kirk explains:

Society is immeasurably more than a political device. Knowing this, Burke endeavored to convince his generation of the immense complexity of existence, the “mysterious incorporation of the human race.” If society is treated as a simple contraption to be managed on mathematical lines — the Jacobins and the Benthamites and most other radicals so regarded it — then man will be degraded into something much less than a partner in the immortal contract which unites the dead, the living, and those yet unborn, the bond between God and man. Order in this world is contingent upon order above.

Burke saw that humans have three great relationships: to those who lived before them, to their contemporaries, and to those who will live after them. Any thoughts about the differences between government and society must account for all three.

The revolutionaries in France fell to one side or the other: either a legalism which found endless rules for people to obey, or an anarchistic nihilistic absence of any guidance.

The antidote offered by Burke was a morality which was not one of endless legislation, but which offered rather ethical guidance, as Russell Kirk reports:

There exists no simple set of formulas by which all the ills to which the flesh is heir may be swept away. Yet there do exist general principles of morals and of politics to which thinking men may turn.

The prudent thinker, Kirk writes, “is concerned with a number of” the same “primary questions” as the French Revolutionaries were, “and with a vaster array of prudential questions, to which the answers must vary with the circumstances and the time.” Contrary to the romanticized image of revolutionaries, the leaders in France from 1789 to 1799 were remarkably doctrinaire and rigid, unwilling or unable to allow for variances in legislation or morality, seeking simplistic universal rules to be applied mechanically to all people at all times in all places.

A century later, Burke’s thought would be echoed by Jacob Burckhardt, who referred to a class of thinkers, including those of the French Revolution, as the “terrible simplifiers.” Burke himself did not offer an ideology; his mission was to show that ideologies are doomed to fail, and in the process of their failure, to bring misery.

Wednesday, January 25, 2023

John Locke Discovers Justice and Tolerance: The Integrated Aspects of Lockean Thought

The decades before Locke’s adulthood were grim: the Gunpowder Plot, the execution or assassination (depending on one’s view) of Charles I, the English Civil War, the dictatorship of Oliver Cromwell, and the Thirty Years’ War, among other painful events. The philosopher Thomas Hobbes lived through these years, and his sinister assessment of human nature is perhaps colored by them.

By 1660, these episodes were over, and Locke would have been 28 years old.

For most of his adult life, Locke lived in an era which was still mixed, but perhaps slightly better. Locke was suspected of being involved in the Rye House Plot, a failed attempt to assassinate Charles II; Locke had to flee to the Netherlands in 1683.

But on a happier note, Locke also witnessed the Glorious Revolution, so called because it was accomplished without violence or casualties. The concepts of parliamentary democracy and constitutional monarchy were developing during Locke’s life — indeed, he was involved in the development himself — and both of those concepts had an optimistic aspect.

Locke suspected that much of the world’s misery and violence had its roots in a distorted version of Christianity, as historian Joseph Loconte writes:

Conspiracy theories, nativism, militant religion, mob violence, a plot to topple the government — welcome to 17th-century England. English philosopher John Locke (1632–1704) lived through one of the most turbulent and divisive periods in British history. At the center of the storm, he believed, was a degraded form of Christianity. “All those flames that have made such havoc and desolation in Europe, and have not been quenched but with the blood of so many millions,” he wrote, “have been at first kindled with coals from the altar.”

By contrast, Locke saw uplifting events in history as based on an authentic and accurate understanding of Christianity. He studied the New Testament carefully, and saw in it a source of tolerance and justice, as Joseph Loconte explains:

Locke’s response was to offer the world a new political vision: a liberal society based on a radical reinterpretation of the teachings of Jesus. The bracing message, contained in his Two Treatises of Government and A Letter Concerning Toleration, was about the liberty and dignity of every human soul. It would form the bedrock of the American political order.

Locke’s embrace of majority rule is founded on the notion of imago dei — the idea that each human being carries an imprint of the supreme intelligence which oversees the universe. The principle of majority rule is attractive only if there’s some reason to believe that the majority of people will produce the right answer in the majority of cases. If each human being bears the imago dei, then the chances increase that the majority of a group has chosen the correct course of action.

A similar logic provides a foundation for the concept of popular sovereignty.

Additionally, Locke sees God as valuing each human being. If God attributes worth to every human, then no human can deem another human to be worthless. The notion that God values people arises in part from the notion that He owns all people, having created them.

For Locke, the first order of business was to insist that every human being bore the imprint of an intelligent, personal, and purposeful Creator. No one must be treated as a means to an end, as someone else’s property. Rather, every individual belonged to God, and was designed for a noble and transcendent purpose. As Locke declared in his Second Treatise: “For men being all the workmanship of one omnipotent, and infinitely wise Maker; all the servants of one Sovereign Maker, sent into the world by his order and about his business, they are his property, whose workmanship they are, made to last during his, not one another’s pleasure.”

The word ‘workmanship’ alerts the reader to a New Testament passage:

Most of Locke’s audience, literate in the Bible, would have recognized his allusion to a passage from Paul’s Letter to the Ephesians: “For we are God’s workmanship, created in Christ Jesus for good works, which God prepared beforehand, that we should walk in them” (Eph. 2:10).

Long before John Locke, authors such as Richard Hooker used the phrase “divine right” to describe royal authority; Hooker’s book of the Lawes of Ecclesiasticall Politie was published in the 1590s.

Against the notion of a “divine right,” Locke posed “popular sovereignty,” a concept which would appear often in American civilization after Locke’s death.

The political and religious leaders of Locke’s day invoked the “divine right” of kings to justify political absolutism. In contrast, Locke argued that only a government based on the consent of the governed could honor the divine prerogative. Here, in brief, is his religious argument for the right to revolution: If the political authority tries to oppress God’s servants — by threatening their life, liberty, or property — it becomes morally illegitimate.

There is a thoroughly Lockean sentiment in these words: The purpose of a government is to protect the lives, liberties, and properties of its citizens. Anything less is negligence; anything more is tyranny.

Locke’s search for the “why” of government — what is the purpose of government? — shifted the emphasis of the debate. The apologists for absolute, or near absolute, monarchy were busy explaining that the dynasty rightly held all the power. Locke asks not who holds the power, but for what purpose it is held.

Once the purpose of the sovereign is clear, then it is also clear that imposing an arbitrary religion on the subjects does not serve that purpose. This is a reflection on the struggles between the Anglicans and the Roman Catholics in the British Isles: since the early 1530s, this had been a matter for the Crown and the Parliament to ponder.

But Locke is suggesting that neither the Crown nor the Parliament needs to busy itself with choosing between Anglicanism and Roman Catholicism. The task of protecting life, liberty, and property is indifferent to the question of religion.

In the century before Locke’s birth, the Protestant Reformation had opened the doors to religious pluralism throughout Europe. For his political theory to work in societies with competing religious traditions, another sea change in thinking was required. The state must stop punishing people for refusing to endorse the preferred national religion. Likewise, church leaders must stop trying to manipulate the levers of power to impose their sectarian values on an unwilling population. “No peace and security, no, not so much as common friendship, can ever be established or preserved amongst men,” Locke wrote in A Letter Concerning Toleration, “so long as this opinion prevails . . . that religion is to be propagated by force of arms.”

The Lockean notion of tolerance was directed to the practitioner of a faith, not to the faith itself. Given this nuance, Locke was free to argue for the ‘toleration of the dissenters’ without committing himself to the truth of their faith; likewise, tolerance for the Jews or the Muslims, to the extent that they practice their faith without damaging the lives, liberties, or properties of citizens.

Spanning two centuries, a number of conflicts had been named ‘wars of religion’ — the French Wars of Religion from 1562 to 1598, the Thirty Years’ War from 1618 to 1648, and the fighting in Ireland. Locke saw that these wars were not truly about religion, but simply about political power. He also saw that the results of these wars left governments in place which failed to serve the purposes for which any and all governments are instituted.

Locke became an eloquent and modern champion of tolerance, as Joseph Loconte reports:

In the wake of the wars of religion, it was assumed that the best guarantee of civic peace and social cohesion was the rigid imposition of a national creed — Catholicism in France, Calvinism in Geneva, Anglicanism in England, and so on. Yet laws criminalizing dissent created a permanent underclass across Europe. Locke watched in anguish as religious dissenters were persecuted, jailed, executed, or sent into exile.

Locke never set foot in America, but his formulation of religious tolerance both shaped and expresses an American point of view:

Locke decided to turn conventional thinking on its head. The key to political stability was not conformity through coercion. Rather, a government that protected — with equal justice — the rights and freedoms of people of all faith traditions would enjoy widespread support. Presbyterians, Anabaptists, Catholics, Quakers, Jews, even Muslims — no one, according to Locke, should be denied his essential civil rights because of religion. “The sum of all we drive at is that every man enjoy the same rights that are granted to others.” (Locke is routinely accused of having denied toleration to Catholics, but a close and contextual reading of his Letter strongly suggests otherwise.) Here is a revolutionary application of the Golden Rule, one of the pillars of Christian morality.

The creation of the United States leaned on many thinkers: Montesquieu, Burlamaqui, Cicero, and others — but perhaps on Locke most of all. Yet Locke’s influence on the U.S. can be misunderstood. He advocated for a truly revolutionary level of religious tolerance; yet he understood that act of tolerance itself as a Christian act.

Locke might say that extending tolerance to a Jew or to a Muslim is itself a distinctively Christian act.

The American Founders, supported by the nation’s clergy, thoroughly absorbed Locke’s political principles — from the separation of powers to the separation of church and state. Nevertheless, many on today’s religious right reject Lockean liberalism for supposedly opening the door to relativism and “radical individualism.” Progressives, on the other hand, applaud his commitment to individual rights but rip it from its religious foundation.

An examination of Locke’s writings reveals the detail and seriousness of his study of theology, and the depth to which his theological studies shaped both his political and his philosophical texts.

In fact, Locke’s political outlook was saturated with Christian assumptions about justice, equality, freedom, and natural rights. Like no thinker before him, Locke combined Whig political theory with a gospel of divine grace and mercy.

God’s omnibenevolence was, for Locke, a foundation for popular sovereignty, majority rule, and the idea that each and every human life has value and is therefore to be protected.

A lifelong student of the Bible, he searched the scriptures for examples of God’s indiscriminate love. He founded a philosophical society that required prospective members to affirm their love for “all men, of what profession or religion soever.” For many years he kept notes from a sermon based on Galatians 5:6, “the only thing that counts is faith expressing itself through love” — a theme that appears throughout his writings and personal letters.

The text of the New Testament was, for Locke, a roadmap to the goal of ensuring that every human life is valued and protected. It is debatable whether Jesus is moderately pacifistic or radically pacifistic, but in either case, Locke saw in Jesus the principle of protecting lives, liberties, and properties.

The life and teachings of Jesus were his lodestar. Jesus never bullied people into becoming his followers, never used force, and scolded his disciples when they were inclined to do so. Christ came into the world, Locke believed, to bring life and peace to those who had become God’s enemies. To Locke, Jesus was “our Savior,” “our Lord and King,” “the Captain of our Salvation,” and “the Prince of Peace.”

By 1695, Locke had expressed his religious views in detail. He was solidly Anglican; he devoted care to the precise study of Scripture, and he did all of this in a way which nonetheless won him admiration from Deists and Unitarians.

Tolerance was, for Locke, not an option but a duty:

Thus, the moral economy of the Christian story must inform the ideals of a political society: The gospel offered a path by which people could live together with their deepest differences. “The toleration of those that differ from others in matters of religion, is so agreeable to the Gospel of Jesus Christ, and the genuine reason of mankind,” Locke wrote, “that it seems monstrous for men to be so blind, as not to perceive the necessity and advantage of it in so clear a light.”

Locke’s texts may be sorted into three main groups: political, religious, and philosophical. In addition, there are other writings about the natural sciences or about economics. Yet Locke himself might have argued that such distinctions are not justified. He might have seen his political, religious, and philosophical writings as related and interconnected. The legal concept of a jury trial might be based in part on his empirical epistemology and in part on the concept of each human possessing the imago dei.

Monday, August 22, 2022

The Three Phases of the Roman Empire and Their Corresponding Military Strategies

The Roman Empire began around 27 BC, replacing the Roman Republic. During the five centuries of its existence, the empire would expand significantly. While expansion is a mark of success, it also creates a need for military strength — and that need changes continuously.

Historian Edward Luttwak describes the geopolitical history of the empire in three phases, in order: the growth, maintenance, and defense of the empire:

The first expansive, hegemonic, and reliant chiefly on diplomatic coercion; the second meant to provide security even in the most exposed border areas, in part by means of fortified lines whose remains are still visible from Britain to Mesopotamia; and the third a defense-in-depth of layered frontier, regional, and central reserve forces that kept the western empire going till the fifth century and the eastern empire for much longer.

For roughly the first century of its existence, the military policy of the empire was expansion. The empire was birthed in the midst of the collapsing republic. One of several reasons for the end of the republic was that it was a form of government suited to ruling the city of Rome and the surrounding territories in central Italy, but it was not suited to ruling a world empire. Although the empire would ultimately rule more territory than the republic, the republic, during its last years, had control of millions of square miles, including the areas of present-day Spain, France, Greece, Turkey, and North Africa.

Maintaining effective military control of occupied regions, and mounting campaigns to gain control of still more land, was a task which required quick and consistent decision-making and a clear chain of command. The Senate, which was the main governing body during the republic, was a deliberative institution which engaged in debate. Military commanders in the field, sending requests for instructions to Rome, couldn’t wait for the Senate’s thorough discussions.

“The first system of imperial security was essentially that of the late republic, though it continued” until around 68 AD, when Nero died. This first system operated “under that peculiar form of autocracy we know as the principate.” The principate was a legal fiction created by Octavian, designed to retain the appearance of a republic while concealing the fact that the actual machinery of government was functioning as a dictatorship.

The empire was born in war, and kept a martial flavor for most of its existence, as Edward Luttwak explains:

Created by the party of Octavian, himself a master of constitutional ambiguity, the principate was republican in form but autocratic in content. The magistracies were filled as before to supervise public life, and the Senate sat as before, seemingly in charge of city and empire. But real control was now in the hands of the family and personal associates of Octavian, a kinsman and heir of Julius Caesar and the ultimate victor of the civil war that had begun with Caesar’s murder.

That war had been an internal power struggle between the three members of the Second Triumvirate: Octavian, Mark Antony, and Lepidus. In 36 BC, Lepidus was defeated by Octavian and sent into exile, where he lived safely and died of natural causes around 13 BC or 12 BC. That left Octavian and Mark Antony to face each other. Mark Antony had Cleopatra as an ally, but the war ended in 31 BC “with the final defeat of Antony and Cleopatra.”

Octavian now held all the power which had originally been divided among three men: the separation of powers was destroyed, the Senate was largely powerless, and Octavian, becoming emperor, took a new name: Augustus.

Octavian-Augustus was succeeded by Tiberius, then Caligula, and later Claudius. At the death of Claudius, Nero became emperor.

Nero’s tyranny and excesses provoked rebellion. In part because Nero failed to clearly identify a successor, and in part because any successor named by him would likely not have been acknowledged, there was an open question about who would be the next emperor.

“When Nero died,” writes Luttwak, it was under mysterious circumstances. He had been obliged to flee the city of Rome because of uprisings against his tyrannical rule. Some historians write that he committed suicide; others write that the governor of a Greek island discovered him living in disguise and ordered that he be executed. Most of the evidence points toward suicide. In any case, when Nero died in 68 AD, “another had already claimed his place.”

On June 8, 68 AD, the Senate, seeing that Nero was unlikely to retain power in the face of the rebellion, named Galba to be emperor. Most evidence points toward Nero dying by suicide on June 9 in Rome. Galba, however, had been in Spain when the Senate named him emperor, and was not yet in Rome.

But the new emperor, Galba, did not arrive in Rome until October and did not live beyond January 69. M. Salvius Otho, an ex-governor of Lusitania, though in Rome as Galba’s follower, procured his murder at the hands of the Praetorians and was acclaimed emperor in turn. By then yet another had risen to claim the office, Aulus Vitellius, governor of Lower Germany and master of its four legions. So far, contention had been resolved through suicide and murder; now there was to be civil war also.

The lower levels of the Roman bureaucracy trundled along, keeping the empire running, while at the highest levels, open warfare among would-be emperors manifested itself in a string of assassinations. After Nero’s death, Rome would have four different emperors within a year.

The military, especially those units on the frontiers, had no steady source of instructions. It engaged largely in holding actions during this time. The era of grand expansion was over.

The outstanding virtue of the principate, the constitutional device invented by Augustus, was its reconciliation of republican forms and traditions with autocratic efficiency. Its outstanding defect was that the succession was dynastic, but without any mechanism to secure it as such, or to replace an unfit dynast. When a tolerable emperor chose a capable successor and made him a son by adoption, all was well. Adoption satisfied the dynastic sentiments of the army and the common people without offending the anti-dynastic prejudice of the Senate. But if there were no adequate son and none were adopted, he became emperor who could make himself emperor; usually by force.

During the next two centuries, succession was not always as chaotic and violent as it was after the death of Nero. But the flaws in Augustus’s precedent had been visible, and succession remained a point of contention for the remainder of the empire.

Although there would be occasional bursts of expansion, the empire’s energy and focus could not be devoted to conquest as fully as before, because some amount of attention was continuously absorbed by the matter of succession.

Thursday, August 4, 2022

Babylon: Military, Political, and Cultural Grandeur Come Eventually to Naught

In the Ancient Near East, Babylon had a central role: it was both conqueror and conquered. It lay at a crucial location on the Euphrates River, where the river had been divided into a number of trenches and canals which flowed through and around the city both to provide water, mainly for agricultural irrigation, and to provide defense in a system of moats.

The Tigris River was not far from Babylon, and together the Tigris and Euphrates gave the region both moisture for crops as well as transportation in the forms of ships and barges which moved up and down the waterways.

The earliest written documentation of the city seems to be from around 2250 B.C., and the city’s importance waxed and waned: during its high points, the city was the political center of the Babylonian Empire; during its low points, it was a vassal to other empires.

The word ‘Babylonia’ refers to the larger region controlled by the city of Babylon, i.e., the Babylonian Empire.

During one of its last glory days, Babylonia conquered Israel and brought most of that nation’s citizens to Babylon as slaves: this would have happened approximately during the time from 598 B.C. to 586 B.C.

The military defenses of the city were formidable. The captured slaves would have been impressed, and probably depressed, by the structures of the city, as described by historians Joachim Marzahn & Klaudia Englund:

Babylon was situated on both sides of the Euphrates, the old town to the east, another half of the town to the west of the river. It was protected by a double ring of walls, the inner wall being some 6.5, and the outer wall 3.5 meters thick. At distances of 17-18 meters, towers of, respectively, 11 and 4.5 meters width formed part of the defenses. At least 8 double gateways stretching 50 meters afforded entrance to the city. The old town alone comprised an area of ca. 2¼ km2, a large part of which was occupied by palaces and temples. Further protection was offered by the eastern wall spanning some 8 km, which also protected buildings beyond the inner city, for example, the Summer Palace in the north.

This system of walls and gates was not only a military defense system, but rather also an impressive work of art and architecture. The famous Ishtar Gates featured bricks with a glossy blue glazed finish. Accents of gold or yellow were mainly in the form of lions.

Beneath the walls and gates, the foundations extended downward even further than the walls above ground extended skyward: This was to prevent an attacking army from tunneling under the walls.

The physical strength and architectural grandeur of the walls of Babylon were paired with the intellectual resources of the city, which was a center of both literary and scientific work.

In 539 B.C., Cyrus of Persia conquered the city with little actual fighting: the city seems to have surrendered quickly. Babylon was incorporated into the Persian Empire, but retained some importance as a regional administrative and cultural center.

Cyrus died around 529 B.C., and various kings succeeded him as rulers of Persia. One of those later kings was Xerxes.

In 482 B.C., the city revolted against Xerxes. He put down the rebellion and, as historian Henry W.F. Saggs writes, the uprising

led to destruction of its fortifications and temples and to the melting down of the golden image of Marduk.

(Marduk was a major figure in the Babylonian polytheistic system.)

Although the city remained in existence for several centuries after this, it was never again a major player in the politics of the Ancient Near East.

Tuesday, July 26, 2022

The Ethics of Capitalism: The Moral Side of the Free Market

Arguments against capitalism often take a moral tone, and accuse capitalism of deifying avarice: of making greed into a virtue. Who wants to defend an economic system which amounts to codified selfishness?

Such attacks are misguided on multiple fronts.

A polemic against capitalism, if it does not specify what it is condemning, is fundamentally ill-founded, because the word ‘capitalism’ carries multiple distinct and mutually exclusive definitions. On the one hand, there is ‘crony capitalism,’ in which a small number of companies cultivate relationships with the government, inducing it to regulate trade and commerce so that only those companies enjoy an advantage, while the majority of companies are burdened by regulation and unable to compete effectively. Such “crony capitalism” offends any intuitive sense of fairness.

On the other hand, there is “free market capitalism,” which seeks precisely to correspond to this intuitive sense of fairness: the job of the government is to be a neutral referee or umpire, allowing companies to compete fairly by offering various products at various prices.

In addition to those two types of capitalism, one can even make the counterintuitive argument that Marxist communism or socialism is a type of capitalism: it argues that the “means of production” should be jointly owned by all the people, in the form of government ownership; and the means of production are nothing other than capital. Marxism, in the form of socialism or communism, can therefore be characterized as “state capitalism.”

If one is to find fault with capitalism, then one must specify which type of capitalism one is vilifying.

Crony capitalism is an easy target: it is almost universally seen as unfair, to the extent that even those who engage in it take pains to hide their activity. Beyond that, crony capitalism is of no utility to the citizens: the market is provided with goods of low quality at high prices, and shortages are frequent. Consumers have fewer options and choices in such a system.

State capitalism is sometimes advocated by people of good will, as a route to social justice. Other times, it is cynically implemented by those who see it as an opportunity for personal power. In either case, it leaves the citizens with fewer choices, higher prices, less quality, and shortages.

A free market system, on the other hand, can achieve the social justice which state capitalism claimed to seek. As economist Ludwig Erhard phrased it:

When I speak of a social market economy, I do not mean that the market needs to be made social. I mean that the market is intrinsically social.

A free market corresponds to an intuitive notion of social justice: a market economy is socially conscious because, as suppliers seek to sell to every demand, every consumer benefits from the competition to offer better products at lower prices. Everyone is a consumer, and everyone is part of the economy’s aggregate demand. In a competitive market economy, every supplier wants to meet as many demands as possible, and suppliers will lower their prices and raise their quality in order to sell to those demands.

Because suppliers want to sell to as many consumers as possible, they find ways to offer products at lower prices. Because suppliers compete with each other, they find ways to offer better quality at those lower prices. As consumers are able to obtain better products at lower prices, the goals of social justice are attained.

Ludwig Erhard showed that market economies lead to social justice, and that property rights lead to civil rights. He showed that when suppliers are free to creatively meet demands and consumers are free to choose, all parties in the marketplace benefit.

As a brilliant byproduct, free markets also give consumers more choice and more abundance.

Efforts to censure capitalism as immoral are effective against crony capitalism and against state capitalism, but gain no ground against free market capitalism, as Russ Roberts writes:

A lot of people reject capitalism because they see the market process at the heart of capitalism — the decentralized, bottom-up interactions between buyers and sellers that determine prices and quantities — as fundamentally immoral. After all, say the critics, capitalism unleashes the worst of our possible motivations, and it gets things done by appealing to greed and self interest rather than to something nobler: caring for others, say. Or love.

It is plausible to make moral arguments against greed. But in a free market, the retailer is working simply to feed his family: this is not greed, it is responsibility. Indeed, he would be chastised if he did not attempt to provide for his family!

Buyer and seller function like a couple dancing: each plays a part. Buyer and seller function like teammates in a sport: one would hardly accuse the player who caught a teammate’s pass of greed because he received the ball.

In any discussion of the various forms of capitalism, it is good form to quote from the writings of Adam Smith, whose The Wealth of Nations appeared in 1776 and became a foundational text in economics:

Man has almost constant occasion for the help of his brethren, and it is in vain for him to expect it from their benevolence only. He will be more likely to prevail if he can interest their self-love in his favor, and show them that it is for their own advantage to do for him what he requires of them. Whoever offers to another a bargain of any kind, proposes to do this. Give me that which I want, and you shall have this which you want, is the meaning of every such offer; and it is in this manner that we obtain from one another the far greater part of those good offices which we stand in need of. It is not from the benevolence of the butcher, the brewer, or the baker that we expect our dinner, but from their regard to their own interest. We address ourselves, not to their humanity but to their self-love, and never talk to them of our own necessities but of their advantages.

If capitalism were merely greed cloaked in the equations of economists, it would be morally suspect, as Russ Roberts recounts:

Capitalism, say its critics, encourages grasping, exploitation, and materialism. As Wordsworth put it: “Getting and spending, we lay waste our powers.” In this view, capitalism degrades our best selves by encouraging us to compete, to get ahead, to win in business, to have a nicer car and house than our neighbors, and to always look for higher profits and advantages. In the great rat race of the workplace, we all turn into rats. Is it any wonder so many want to kill off capitalism and replace it with something more just, more fair, more humane?

But he goes on to point out that only in a free market system does the retailer have a motive to treat his customers honestly and fairly. Only in a free market does the employer have a reason to pay his workers well. The freedom of the marketplace means that the consumers can go elsewhere if they perceive that they’ve been given a bad deal. Workers can go elsewhere if they feel unappreciated.

This is why people who live in countries with even a partially free market consistently enjoy a better standard of living than those who live in “command economies” or “planned economies.”

The political freedom and personal liberty which accompany a free market are pleasant byproducts of the laissez faire economic system.

Endless tirades have been made — and will be made — against “capitalism,” but when those attacks are carefully analyzed, they fail to undermine the results obtained by a free market system. Those who benefit the most from a free market are not the wealthy capitalists — for such people will enjoy a high standard of living in any economic system — but rather the working-class people, who obtain a level of prosperity not available to the working class in nations governed by other economic doctrines.