
Considering the Hornets Nest

Popular myth in Charlotte holds that after being driven from the area in the autumn of 1780, General Charles Cornwallis remarked that Mecklenburg County was a “hornets nest” that could not be held by force. There is a great deal of local pride in this anecdote, and several organizations and landmarks have derived their names in whole or in part from it, including parks, a local Girl Scouts council, an elementary school, and the city’s NBA team (the Hornets). In addition, a representation of a hornets nest is one of the official symbols of the city, and the Charlotte police display a hornets nest on their badges.

The anecdote seems ubiquitous, but its foundations in historical fact are questionable. There are plenty of secondary sources dating back to the mid-19th century citing the “hornets nest” story in one form or another, but many of them disagree on the particulars. The modern version, in which Cornwallis said it after losing control of the region, is joined by versions where Cornwallis’ cavalry commander Banastre “Bloody” Tarleton uttered the words after his soldiers lost to patriot forces at the Battle of McIntyre’s Farm, and even versions where the “hornets nest” refers to Lincoln County in Georgia and not Charlotte Towne at all.

How can you tell the difference between historic fact and historic fiction in a case like this? A recently published six-volume collection of letters and other writings by and to Cornwallis may hold insight into the accuracy of the tale. Yet even if no direct evidence that he ever wrote the words “hornets nest” can be found within the hundreds of letters linked directly to the time he spent on campaign in the Southern Theatre, the absence of such evidence is not itself evidence of absence. There might be evidence hidden away in the collected writings of Tarleton, or mentioned in one of the few remaining (poorly) preserved newspapers of the era. Or the mysterious primary source may simply be lost to the decay of time.

Regardless, the story of how the patriots of Mecklenburg County, North Carolina bested the British Army has had an unquestionable impact on the people who have since lived in Charlotte. The “hornets nest” tale is without a doubt part of the local culture’s oral history. Just don’t ask about the Mecklenburg Declaration of Independence.


Resources & Relationships

Conventional wisdom is that modern societies, at least, can be divided into three spheres – political, economic, and social orders – and that ideally each should be supported within the everyday lives of individuals by different rules, norms, and the like. In this paradigm, the ordinary relations of friendship, enmity, family, and so on carry expectations of communalism and reliance upon precedent to decide whether a person should be accepted in friendship, rejected, hated, or ignored. The reasoning varies between sub-groups within the population, but peer pressure almost always plays its part. The economic sphere, on the other hand, emphasizes that the things transacted are what matter, not where or from whom they come; the only thing that is supposed to count is the financial or material advantage of the individual decision-makers. And politics is supposed to address the control of violence, restricting all of human behavior by governing how and when violence can be employed. In this sphere, influence over the mechanism of control is of paramount importance, and relationships described in social or economic terms are viewed with suspicion.

These three spheres can be seen as somewhat similar to the anthropologists’ three realms of communism, exchange, and hierarchy. To a degree, these follow much the same form. Social relationships behave much like the communistic ideal of sharing: once a person is a friend, spouse, or family member, there is an expectation that people will help those so designated along the lines of “from each according to their ability, to each according to their needs.” Likewise, economic relationships can be related to exchange, and political relationships to hierarchy. The associations are not perfect, however, as social, economic, and political relationships can each have aspects following the forms of communism, exchange, and hierarchy. Why is this correlation skewed? Is it that one or both conceptual models are intrinsically flawed?

The disjuncture has been caused by changes in relations between people through history. A significant part of this can be hypothesized with help from the field of anthropology – the stages of development which came before the advent of written history. The first was the hunter-gatherer stage. With almost nothing by way of records from this period beyond the stunning artwork left behind in some places, the details of societies from this time can’t be known with any certainty. The vague outlines seem to be that population density was low, populations were small, and organized violence was unlikely to have been common. In such an environment, relationships of hierarchy were probably rare, and the need to secure resources suggests that whatever formalized relations did occur could as easily have grown out of communism as out of hierarchy.

Relations along any of the three axes – communism, hierarchy, and exchange – belong more to a spectrum than to the discrete separateness of “spheres.” When individuals become enamored with their communistic designation or leadership, relationships which otherwise maintained an egalitarian equivalence can easily take on the inequality of hierarchy. So too with the principles of exchange: when for whatever reason the process is prevented from being completed and the relationship between parties is stuck in perpetual imbalance, it likewise develops the trappings of hierarchy. This might suggest that hierarchy can only exist subsequent to either communism or exchange, but evidence from studies of other great ape societies shows that hierarchy is present far more frequently than communism, while exchange is almost entirely absent from the findings of such studies.

It would be easy to assign a mythical standard to the forms of human interaction – that originally all people lived within egalitarian communist societies, developing into hierarchy at an unknown point when some individuals chose to enforce their desires on others by means of force, and developing exchange only where imposed from above within an hierarchical organization as a means of facilitating the flow of resources from hierarchically subservient populations to the ends assigned by those holding power. Karl Marx and subsequent generations of communist and socialist thinkers have claimed as much. But I believe the more complete explanation is significantly more complex.

The more reasonable explanation is that all three types of interaction coexisted in prehistoric societies, the frequency of each being subject to variables such as degree of trust between individuals, availability of resources, and the duration and frequency of exposure to and interaction with individuals alien to the symbolic “us.” When trust is high and resources are ready at hand, interactions tend to be communistic. When trust is high and exposure to alien individuals is frequent, those interactions might tend more strongly towards exchange. And when trust is low and/or resources are scarce, hierarchy is more likely. These are trends, and should not be treated as absolute claims, for each interpersonal interaction is carried out by human choices and only informed by – not decided by – predispositions. There is, indeed, a relationship between the familiarity of interacting individuals, the availability of resources, and the level of trust such individuals might be predisposed to feel for one another. Again, though, these would be descriptive predispositions accounting for only a part of those informing the choices of participant individuals.
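Read as a rough decision rule, the tendencies just described can be sketched as a toy function. The variable names, thresholds, and cutoffs below are illustrative assumptions of mine rather than anything measured or claimed in the essay; the point is only that these are predispositions informing choices, not determinations of them.

```python
# Toy sketch of the claimed tendencies (illustrative assumptions only):
# given rough levels of trust, resource availability, and exposure to
# outsiders, guess which mode of interaction is most likely to dominate.

def likely_mode(trust: float, resources: float, outsider_exposure: float) -> str:
    """Return the interaction mode suggested as most probable.

    All inputs are rough 0.0-1.0 scores; the cutoffs are arbitrary.
    """
    if trust < 0.4 or resources < 0.3:
        return "hierarchy"   # low trust and/or scarce resources
    if outsider_exposure > 0.6:
        return "exchange"    # high trust, frequent dealings with strangers
    return "communism"       # high trust, resources ready at hand

if __name__ == "__main__":
    for args in [(0.9, 0.8, 0.1), (0.8, 0.7, 0.9), (0.2, 0.5, 0.5)]:
        print(args, "->", likely_mode(*args))
```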

Perhaps the most significant distinction of hypothetical prehistoric and subsequent historic human societies from those of other great apes is the presence and variability of trust. Trust here means faith in the reliability of the actions of others, facilitated by the increased precision in communicating concepts and intentions that goes hand-in-hand with verbal or written language. Through language, it becomes possible for individuals to precisely describe the motivations behind actions and partially lift the veil of separateness inherent to individuality. It may even be possible to draw a general assessment of human interaction: the more information one person has about another, the more familiarity is present, and there is at least a positive correlation between familiarity and trust. This is not to say that there cannot be familiarity and trust without language, but rather that both are more easily and more completely possible with it than without.

But what does any of this have to do with an explanation of the origins of the types of social relationship in prehistoric and early historic societies? Well, if verbal language facilitated an increase in trust between individuals, that increase was necessarily moderated by how unreliably humans preserve the accuracy of information exchanged over time – whether through intentional deception or the natural imperfection of memory recall (see the telephone game and studies on witness reliability in court testimony). With the advent of written records, the accuracy of recall increased. Consider that while a person can be swayed by argument, violence, or time to recall past events differently from their original occurrence, once something is put down in writing, so long as it is not actively altered, the words will not change no matter who the reader is or the circumstance of the reading. The information is not perfectly secure, but the presence of written words – or even of other physical symbolism – creates the opportunity for more reliable exchange of information over time and across distance.
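The contrast between drifting verbal recall and a fixed written record can be illustrated with a minimal “telephone game” simulation. This is purely an illustrative sketch: the per-word error rate, the number of retellings, and the example message are invented for demonstration, not drawn from the essay or any study.

```python
# Minimal "telephone game" sketch: a message relayed verbally through many
# intermediaries accumulates errors, while an unaltered written copy stays
# fixed. The error rate and hop count are arbitrary illustrative values.
import random

def relay(message: str, hops: int, error_rate: float = 0.05) -> str:
    """Pass a message through `hops` retellings, each word possibly garbled."""
    words = message.split()
    for _ in range(hops):
        words = [("[garbled]" if random.random() < error_rate else w) for w in words]
    return " ".join(words)

if __name__ == "__main__":
    random.seed(0)
    original = "the flood came after the third full moon and spared the upper fields"
    spoken = relay(original, hops=20)   # oral transmission drifts
    written = original                  # an unaltered written record does not
    print("after 20 retellings:", spoken)
    print("written copy       :", written)
```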

This can be seen in the use of symbolon in pre- and early-historic Greece, as well as fu in ancient Chinese cultures. In both cases, the symbolon or fu were physical objects representative of a promise between individuals. Whereas a purely verbal agreement may be denied by one or more parties, forgotten, or ignored, the presence of a physical symbol representing the pact decreases the likelihood of these outcomes by not relying wholly on the memories of those who witnessed the actual event. That such objects existed suggests the presence of norms of behavior between persons within a social population – norms which promoted trust between individuals by holding honesty and adherence to agreements in esteem – but which carried significant risk of being violated for convenience or through lack of agreement regarding adherence.

It isn’t too great a supposition, then, to postulate that before the advent and spread of written language – the advent of history in the proper sense – societies were relatively egalitarian, with the intermix of the designated spheres of relationships varying but relying predominantly on the trust which is the foundation of (local) communism and (alien) exchange.

Maybe communism and exchange aren’t so much wholly separate, as the academic literature heretofore has suggested, but rather two sides of the same coin. I would propose that the key difference between the two is that communist relations involve the expectation that gifted or exchanged resources need not be immediately reciprocated – a consequence of the trust and social familiarity leading people to accept that resources will eventually come back in kind – while exchange between strangers assumes that reciprocation cannot wait, because the interaction is temporally limited and will end. The anticipated duration of the relationship is ultimately what determines the difference between baseline communism and exchange. Hierarchy, then, could be described as a social structure (at least at the baseline) which controls the flow of resources, initially as a consequence of limited access to those resources.

Somewhat ironically on the historic scale, the very factor increasing trust – writing – was likely a consequence of the leadership of a baseline hierarchy attempting to secure its authority. Doubly ironic because it was probably conquest at the hands of nomadic peoples, who had rejected that baseline authority and departed from the population centers, which pushed the use of writing beyond its original purpose of recording economic holdings and transactions and transformed it into a method of laying out indisputable laws explicitly restricting the behavior and relationships of subject persons.

In a general sense, the written record appears to support this theory, as the earliest known Mesopotamian tablets were recorded by ruling temple complexes for the purpose of accounting for the accumulation and distribution of resources. It was only later that kingship entered the record, apparently modeled after the preceding temple systems and using writing to preserve records of edicts and laws handed down through the hierarchy. While the Indus Valley civilization did not leave sufficient records to judge, the later Ganges culture did leave writings describing ideals of human behavior. And early Chinese writings similarly attempted to describe why people should be subject to hierarchical authority.

From the perspective of modern literate society, in which the distinction between fact and fallacy (or nonfiction and fiction) is closely guarded and enforced, it is not difficult to lose sight of how and why early writing was so significant. But when the natural expectation among a population is that knowledge can only exist within a mind, and that population is then exposed to a technology which preserves knowledge outside of a mind – capable of recall by any who have the skill to decipher it – the philosophical significance will not be lost on the members of that society, even if, perhaps, it is not explicitly or accurately described at the time.


First Principles in the Declaration of Independence

The concept of personal rights being separate from the power exerted by government was by no means a creation of the late 18th century; however, the single phrase most often cited as the underlying basis for liberal political philosophy, “all men are created equal”, comes directly from the United States Declaration of Independence, penned in the month leading up to the 2nd of July, 1776. The authors of this renowned piece of history, Jefferson et al., have alternately been placed upon pedestals and demonized for the details of their lives and their roles in creating not only a nation, but the enduring framework for a grand experiment in democratic government.

Unfortunately, many people today blindly follow political leaders who, understanding the necessary element of faith involved in accepting the first principle arguments that appear to be made in the Declaration and other Revolution-era works, take advantage of that faith and assert, without regard for honest discourse, that their claims are derived from the same source as the core doctrine of American political philosophy rather than from mere lust for power over others. Not only are these demagogues misleading through their false claims of equivalency, they also fail in their analysis of the core arguments which they attest to defend. Even the most cursory examination of the original work within an historical and philosophical context reveals that Jefferson and all the men from the Second Continental Congress who signed the Declaration were fallible, political, and relatively ignorant. That being the case, they were also responsible for a set of very good ideas, many of which have subsequently proven not only to be inspirational, but defensible by modern evidence.

A great deal of the Declaration is written as a list of charges against the British King George III and his Parliament, but the heart of the document is a single sentence with four clauses: “We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty, and the pursuit of Happiness.” Each of the four clauses of this sentence should be examined if the full meaning of the sentence is to be understood.


The first clause has three keywords: “we”, “truths”, and “self-evident”. Originally, “we” referred only to the undersigned, those who put their names to a document that would be considered treasonous if the war for independence from Britain was lost. That is not to say that a more universal consensus was not desired, rather that the Congress was in dire need of legitimacy during the Revolution, and sought to gain it both within colonial America and abroad by showing Americans that the representatives attending the Congress were willing to put their lives and reputations on the line, while simultaneously showing the British (and potential European allies such as the French) a nation unified in its resistance to foreign rule. It is very easy for Americans today to think that because things turned out the way they did, Americans during the Revolution knew, or at least felt confident about, how things would turn out. Mostly they hoped, gambled, and fought.

The “truths” listed in subsequent clauses of the sentence are declared “self-evident” largely as a reference to the body of British legal precedent which formed the original common law. A somewhat implicit set of guiding principles underlay the trends in jurisprudence of the day, often derived just as firmly from the arguments of famous philosophers as from the decisions of judges in previous cases. The rights of freeborn British subjects were “self-evident” not only because of the writings of a few authors defending the idea of the social contract, but because a combination of legislation and judicial case law agreed that such rights existed. Consensus and a priori logic are very different things, though, and without context it is not difficult to conclude that the only argument being made was one of first principles, and that it had no defense beyond faith and solid rhetoric. Law and philosophy were closely intertwined at the time, and while they still are to a degree today, the legacy of reliance upon logic seems to have become somewhat separated from the modern creation of laws. The other inheritor of the logic and reasoning of philosophy, science, has led to spectacular advances in technology and the understanding of the physical world while only infrequently being used to inform the lawmaking process.


The second clause, “that all men are created equal,” is quite possibly the most significant concept ever written by an American. So many people believe this so unquestioningly that anyone who dares to publicly doubt it is shunned by their peers and anyone else who happens to hear of it. Equality is the single underlying principle which has driven American political theory for hundreds of years. The concept of equality has been very selectively applied throughout history, however. Typically, it hasn’t even meant the same thing to different people living at the same time. Even while the Declaration was being written, there were dissenting schools within the Congress. Some thought that “men” should only include free citizens, others that it should only be free men of financial means who owned a certain minimum amount of land.

The hypocrisy of a slaveholding gentleman declaring that all men were equals was not lost on the people of the 18th century. Nor was the misogyny inherent in referring only to men, and thus implying through omission inequality for women. Flawed though it was, it is important to remember that the claim of even limited equality was revolutionary. Not because it was an original idea, but because an old idea was being taken to a radical new extreme. Although it is a far cry from declaring universal suffrage, “all men are created equal” was written to refute the idea that special privileges should be afforded to any person merely due to their inherited “nobility”.

Modern interpretations of equality are more radical, and deservedly so. But if the foundation of universal equality does not lie explicitly within the philosophical arguments presented within the Declaration, where can it be found? Textually, civil equality is written out in the Bill of Rights, the Civil Rights Amendments, and the 19th Amendment, but they are hardly what could be called a philosophical foundation – more the consequence of a series of arguments being codified only after significant debate (and open war, in the case of the Civil Rights Amendments).

The true foundation of universal equality is an interwoven jumble of idealism, cynicism, and utility. Humanity is an umbrella of diversity and individual complexity, and consequently any group of people will inevitably face conflict. The larger the population, the more opportunities there will be for conflict between individuals. Conflict of any stripe threatens a person’s homeostasis (homeostatic equilibrium being one of the proven psychological goals of essentially all persons), so it follows that when individuals interact, the best way to stay balanced and alive is to abide by behavioral guidelines which facilitate other members of the population staying balanced and alive as well. In short form, it’s a good general rule not to take actions against other people that you would not want taken against you. The iterative nature of social interactions, coupled with the formative conditioning by which the human mind learns, provides a viable mechanism whereby such behavioral limitations reliably result in significant mitigation of personal risk.
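One hedged way to make that iterative mechanism concrete is the standard iterated prisoner’s dilemma from game theory. This is an illustration of mine rather than anything the Declaration or the essay names, and the payoff numbers are the conventional arbitrary values: over repeated encounters, a strategy of reciprocal restraint avoids the losses that unrestrained harm invites in return.

```python
# Toy iterated game (illustrative only): "reciprocal" refuses to harm unless
# harmed first; "exploit" always harms. Payoffs are conventional stand-ins
# for the risk each choice poses to a person's equilibrium.
from itertools import product

PAYOFF = {  # (my move, their move) -> my payoff; "C" = restrain, "D" = harm
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def reciprocal(history):
    """Start with restraint, then mirror whatever the other side did last."""
    return "C" if not history else history[-1][1]

def exploit(history):
    """Always harm, regardless of history."""
    return "D"

def play(strategy_a, strategy_b, rounds=50):
    history_a, history_b = [], []   # each entry: (my move, their move)
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = strategy_a(history_a), strategy_b(history_b)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        history_a.append((move_a, move_b))
        history_b.append((move_b, move_a))
    return score_a, score_b

if __name__ == "__main__":
    for a, b in product([reciprocal, exploit], repeat=2):
        print(f"{a.__name__} vs {b.__name__}: {play(a, b)}")
```

Pairs of reciprocal players end up far better off than pairs of exploiters, which is the sense in which iterated interaction can make self-restraint a risk-mitigating habit rather than a sacrifice.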


In the third clause of the sentence, “that they are endowed by their Creator with certain unalienable rights,” special focus should be given to the words “Creator” and “unalienable rights”. The Christianity of the authors and signatories of the Declaration is not in doubt. For centuries in Europe, and then in America as well, the study of religion and philosophy went hand-in-hand. Many of the arguments best associated with western liberalism come directly from the analysis and criticism of early Christian scholars like Augustine of Hippo.

By the time of the American Revolution, while many ancient religious claims had been tested through long debate, the precept that an omnipotent deity had played a direct role in the existence of humankind on Earth was essentially unquestioned. Jefferson himself was an outspoken deist, believing that an all-powerful God had established the world and the laws of nature, then allowed them to play out like a magnificent clockwork towards the fruition of a grand plan.

The concept that people are created is not unsupported by secular theories which do not rely on the guiding hand of an all-powerful God. In fact, people are literally created biologically by their parents and psychologically by their interactions with social and environmental factors throughout their lives (the nature vs. nurture debate is decidedly more nature and nurture than just one or the other). The meaning of the third clause translates best into modern English as “they are endowed by [virtue of their humanity]…”, itself a very broad statement that shares with the original an acknowledgement of the diverse nature of the human population. As with the claims of equality, this was something which was originally fairly limited in meaning, but when carried to its logical conclusion it is necessarily inclusive and universal.

When it comes to the “unalienable rights” people are endowed with, there is significant modern debate over precisely what rights are – and thus what it means for a government to protect them. Given the forms of taxation contemporary to the beginning of the American Revolution, the forced extraditions, and the animosity between colonists and the British military, it makes contextual sense for “rights”, which the British government might have seen more as privileges to be dispensed by its authority, to be deemed “unalienable” in the Declaration as a protest against the perception that such dispensed privileges were being infringed upon. Between this language and the post-Revolution debate between the authors of the Federalist Papers and the Antifederalist Papers, which led to the passage of the Bill of Rights as explicit restrictions placed on the powers of government to protect the governed, the debate about what a “right” is has come to assume, at least in form, that the powers granted to the government come at the people’s expense, paid in freedoms. The insistence that there are “unalienable rights” that need protecting appears to have come to serve the growth of government powers to a degree that has gone far beyond the original issues the Revolution was supposed to have been fought to overcome.

Taken to its logical conclusion, the “unalienable” nature of certain rights should, in a philosophical sense, make such practices as incarceration untenable. The Due Process clauses of the 5th and 14th Amendments provide a legal argument against the logic, constitutionally providing governments with the tools to target specific parts of the populace and deprive them of rights otherwise considered “unalienable”. Like a grand game of bait-and-switch, Constitutional “rights” are held out to dazzle even the most adept observers while the authority of government has expanded into the spaces between explicit protections. It turns out that all rights are alienable, and the meat of the third clause was merely protest rhetoric.

Enumerated in the final clause of the sentence are the key unalienable rights, “that among these are Life, Liberty, and the pursuit of Happiness.” This phrasing bears striking resemblance to the protected rights described by John Locke just under a century before the Declaration was drafted. Locke’s list included “life, liberty, and property”. If there had been any question regarding which side of the contemporary social contract debate the Continental Congress was arguing from, so directly referencing Locke would have provided a firm answer. Thomas Hobbes’ depiction of the social contract, a king holding absolute power for the protection and welfare of his subjects, was how the American Patriots wanted the British Crown to be seen, so it made sense to portray themselves as part of the philosophical school that opposed Hobbes’ ideas.

Having a right to “life” is pretty straightforward: a claim that people are their own and not merely resources for exploitation. So, too, is the right to “liberty”; not only were arbitrary imprisonments common in the period before and during the Revolution, but the idea of restricting how colonists could interact economically, both internationally and between colonies, was a major threat to the planters and mercantilists throughout America who relied on their margins of profit to maintain not only their stations in society, but their livelihoods.

But why should the right to “property” used by Locke be replaced with a right to “the pursuit of Happiness”? Property is much more straightforward, and does a good job rhetorically pounding home the resistance against the myriad taxes which had been placed on the American colonists after the North American campaigns of the Seven Years War. Indeed, many academics today have argued that “property” and “pursuit of Happiness” are synonymous, somehow claiming that the pursuit of property is the more proper phrasing and that the Declaration is only worded differently from Locke so that it would appear more original. A common counter argument is that Jefferson in particular among the group who drafted the Declaration was advocating for an early form of the American Dream – that people should by right be free to live by their conscience so long as they do not prevent anyone else from doing the same.

The two sides need not be treated as diametric opposites, and they both describe part of a broader principle of human psychology: the desire for homeostatic equilibrium. It is unlikely that “the pursuit of Happiness” was somehow a deliberate anticipation of a psychological theorem only developed some two hundred years after the Declaration was written. But it does stand to reason that because both Jefferson and modern psychologists examined the same human condition, they might well have come to similar conclusions despite vastly differing methods. Regardless, the form of the Declaration has long informed the way people think about the opinions and motivations of early American political philosophers and actors.

The Declaration of Independence was a significant work in the tradition of American political philosophy, and it has long been held up as evidence of a valid a priori depiction of how people should be governed. At best it shows how the political elites of the Revolutionary era wished to frame their support for war in the language of the English legal tradition. Any honest attempt to discern true first principles underlying human behavior must not be tied to the political rhetoric of any period in time, and should instead focus on objective scientific studies of how people behave and interact both across the globe and throughout history.


Thinking With Symbols

Symbols are a common part of human thought, whereby complicated and multifaceted experiences, and parts of experiences, are grouped together within the mind of an individual as an aid to more rapid and complex cognition. A primary example of this symbolic thought is language. Children are born unable to understand or articulate much of anything beyond very basic emotions because their minds do not yet have the use of symbolic language. Yet by the time they can walk, children can typically make at least some use of words – and by the time they are adults, individuals are often so used to thinking with language that the alternative is elusive and difficult to conceptualize.

Language is among the set of symbols individuals make use of which are informed by external stimuli – created by some people and actively or passively taught to others (perhaps not exclusively; certainly there are artificial or synthetic languages which are only used by their creators). But other symbols abound. Pictures, statues, buildings, personal tokens: all of these are physical objects which can be assigned symbolic importance. And as words can exist without writing or even sound (if you think about it), symbols can exist within the minds of individuals without the need of attachment to a physical object. Emotions or life-choices can be held as symbolic, and rituals are behavioral patterns which have been assigned symbolic meaning. What makes something a symbol is not what it is or how it is classified, but only that the thing is seen as representing something else. Sometimes the symbolic meaning of a thing can be so significant to an individual, or the original thing so insignificant, that the original thing is only ever perceived as the symbol by that individual.

A person can spontaneously assign symbolic meanings to the world around them. It is typical that the symbolism begins with a close relationship to the symbolic object and tends to evolve throughout the life of that individual as well as iteratively while it is communicated between individuals. Thus, the evolution of cultural norms of behavior and expectation within and between generations both shape and are shaped by such symbolism.

A pervasive set of symbolism pertains to deities, faith, religion, spirituality, superstition, and mysticism. The human mind relies on symbolic thought as a significant cognitive aid when problem solving; it is a learning system which constantly compares different parts of the world around it, and it has evolved the ability to seek out why things happen and not just what has happened. Because of this, when mysteries in the environment appear otherwise unsolvable, it is a natural consequence that many individuals will assign the mysterious events familiar qualities so that they are less scary and easier to explain. Personification is a particularly prevalent and powerful means to this end, because what could be more familiar than the mysteries of other people’s thoughts and motivations? Other people’s behaviors are so familiar, in fact, that entire systems of symbols have been devised to help understand them.

Once these personifications spread, thanks to the factors that stabilize symbolic meanings across populations and time, they can become increasingly significant to the individuals who have accepted them and, in various ways, universalized. One tree is sacred, so other trees are sacred; that river floods and changes the environment, thus it is doing so at the behest of a powerful spirit, and then other rivers must have spirits as well, and so on. Through the complicated back-and-forth of such assignments of symbolic meaning, both within the mind of a single individual and between people who share ideas, systems of belief can develop as a kind of logic is applied to the myriad symbolic explanations for how and why the world works.

Historically, these symbolic systems were used to describe and legitimate the sustenance of hierarchical social relationships as early as the written record extends – back to Sumer and the city-states of Mesopotamia. Such societies grew to considerable population and complexity, and it was perhaps the need of those in positions of power to fulfill the expectations of their own symbolically defined systems through stability and organization that drove the development of written language, mathematics, and systematic exchange – as well as improved processes for agriculture, the construction of buildings and roads, and new weapons technologies, which gave those who carried out the violence necessary to maintain the hierarchical social order more efficient means to back up their threats as populations grew.

The modern conception of the social contract is itself a symbol. That symbol unifies people into a society and attempts to set the tone for subsequent discussions regarding the specifics of the ongoing relationships between individuals in positions of hierarchical power, and the rest of the population. In the United States of America, that symbol takes a form similar to many before it of the same tradition, that the government should be just by adhering to the principles of “all men are created equal…” and the terms of the Constitution; that it should be reserved in its execution of violence by not behaving as the British empire that preceded it in authority over its claimed territories; and that its authority to exert power, through violence or merely the veiled threat thereof, is derived wholly from the consent of the individuals whom it claims to represent.

So long as the government which claims its authority through consent relies upon the induction of fear, through the threat and use of violence, to enforce its laws and authority, the symbol by which that authority is claimed is little more than a fraud perpetrated by those first conquerors over their victims. In the presence of universal violence, authority will inevitably be enforced through violence. In its absence, such symbolic authority can only be enforced through the aggregate faith of the individuals comprising the government’s constituency. This is necessarily a fragile legitimacy, easily lost when those in power violate the restrictions of their symbolic contract. What the American government has done since at least the middle of the 20th century is attempt a balance: the use of secret violence hidden from the public; the pervasive dissemination of the fear that alternatives to the present power structure will with certainty bring about death and terror; and the use of that fear and secrecy to mask the steady erosion of adherence by the people within government to the symbolic contract, and of the faith of the public, in favor of the presumably superior stability of enforcement through violence.


The Social Contract and Natural Law in the 21st Century

Thomas Hobbes is credited with first explicitly laying out what became known as social contract theory, but the idea can be traced through Machiavelli to Augustine of Hippo, and all the way back to ancient Greece and the two generations of philosophical work from Plato and Aristotle. Today, the concept is so ubiquitous within the western tradition that its model appears fairly simple: in order to avoid the chaos and danger of systemic anarchy, it is necessary for “the people” of a place to surrender some of the absolute freedom that an anarchist environment would provide, and submit to the laws restricting behavior as dictated by an hierarchical government – in America, one which holds its authority only with the consent of “the people.” When the model is broken apart and examined piecemeal, however, the seemingly common-sense assumptions propping it up may seem somewhat less obvious.

The first premise, that absolute freedom for individuals leads to chaos, stems from the state of nature theory found at least as early as Roman law of the republican era. The idea of the state of nature is that there is a categorical difference between rules established by people (laws of nations) and those which exist even in the absence of people (laws of nature). The theory arose again during the Renaissance, again premised on this delineation and presumed to harken back to a theoretical existence before the creation of the first state. In this pre-state environment, the only laws that applied to people were those of nature. As such, the theory goes, because every individual is only interested in advancing their own ends (the presumption of self-interest), the resulting competition would inevitably lead to nearly absolute social chaos.

This, of course, ignores the role of social relationships such as those between close relatives, members of a household, romantic couples, and close friends, all of which effectively bind people together socially, and focuses only on the relations of rivalry, enmity, and the like which compel people to be at odds with one another. Even the grouping of individuals into a discrete category, “the people,” at this level of a state of nature is contradictory, both with the assumption of total chaos and with the logical conclusion that the only categorical designations which would apply would be those within the perspective of each individual.

The concept of natural law, from the Roman Republic on, establishes a sound principle: there are rules which exist outside of those which are invented by humans. Those laws are the immutable guidelines of the physical world that are theorized through scientific study in physics, chemistry, biology, and all the other scientific disciplines. While Roman law appears to have seen nature as something to be conquered and superseded by the laws of nations, the truth is that these natural laws are wholly separate from human laws precisely because they cannot be conquered or superseded.

Human laws, on the other hand, are behavioral restrictions enforced through threats of violence in one form or another, and executed by means of human action. When human behavior is observed in the absence of these restrictions, the result is not a war of all against all as Hobbes assumed, but instead a complex system of social arrangements between individuals, based upon their choices as informed by the ever-changing but understandable system of emotions, symbols, and logic which shapes any individual’s perspective. Thus, family groups and friendship networks will tend to work together. And when competition does present as a factor within social relationships, it will frequently occur in such a way as to avoid violent confrontation between individuals. Only when competition is between disparate groups so separated by territory or social network that other motivating factors push aside the desire to avoid violence, or when relationships of trust between family and friends are rejected by the parties involved, does interpersonal conflict reliably result in violence.

This is not at all a claim that people who live in non-state societies will never have violence between members of the same social network. Rather, what violence did occur would be far from absolute, and not a consequence of total social chaos as suggested by the traditional theory of natural law. Individuals in any society tend towards emotional and symbolic reasoning at least as much as rational decision making in determining their choices. Between the two, there is sufficient motivation from neurological predisposition and the drive to establish and maintain homeostatic equilibrium that individuals within a population will reliably engage in social behavior if given the opportunity.

Also rejected is the assumption that “the people” will make a universal and abstract choice to give up freedom in exchange for social stability. The absolute chaos described by Hobbes does not exist in naturally occurring groups of stateless peoples, and historically such chaos has presented itself only when the people involved were exposed to a level of iterated violence so far beyond the baseline of stateless societies that the various structures of sociability were rendered wholly ineffective.

Such synthetic violence appears in the anthropological record (typically in prehistoric circumstances) almost exclusively when one group of individuals relies on coercion to establish its hierarchical superiority over other groups – which are subsequently bound together by their shared fear of coercion into something resembling a unified category (symbolically representable as us against them). Systematic violence does not come as a universal consequence of non-state organization, only from those groupings of individuals which have chosen to replace the social symbol of us and them with us against them. Conceivably, the role of fear within the cognitive processes of the populations targeted by such violence might even inform the growth of sociality within those populations, merging them together into a group closer to the hypothetical of “the people” seen as an essential element of a state.

It follows that the artificial elevation of one group over another (hierarchy) through the use and threat of sufficiently elevated violence can lead to the creation of government, either through accepting the aggressors’ terms and threats or defending against them. It is not the spontaneous consensus of “the people” out of the total chaos of a Hobbesian war of all-against-all which created the first “states” in history. In the process, there is resistance and disagreement about how to deal with the aggressors, and when a consensus is reached it is decided only by those who remain. Their descendants end up bound within a system of relations born from endemic violence, and are not themselves free from the symbolic us against them.

This is, of course, a very simplistic and hypothetical representation of the evolution of hierarchy and government within prehistoric populations. No doubt, preceding conquest by violent outsiders, similar dramas played out among smaller groupings of individuals, some even resulting in smaller-scale scenarios where hierarchy developed based on violence. But these situations did not predominate in those early settings, and it was not until the level of violence reached a point at which it could not be ignored or successfully defended against by the individuals within geographically or symbolically delineated areas that the tide of pervasive egalitarianism turned and hierarchy was able to predominate.

The social contract is largely a consequence of violence on a large scale, with perpetuating aggressors and victims. With that recognized, it is possible to honestly address the evolution of the role played by hierarchy in later iterations of historical events. Hierarchical overlords used (and still use) their ability to wield overwhelming violence to attain their needs and desires, first by simply taking what they wanted from the people they conquered, subject peoples now symbolically owing their lives to their superiors for not being killed (yet). What is a portion of a family’s harvest, a crafter’s wares, or a woman’s fertility (and thus her freedom and dignity), when compared with that omnipresent fear of imminent and violent death?

The need to choose between death, exodus, or subservience continues once an individual is in the role of “subject” within such a hierarchical relationship. Those who do not leave, or die resisting the authority of their superiors, have either taken control of the hierarchy themselves or have chosen to submit and endeavor to make ends meet within their circumstance. Acceptance can thus become symbolically incorporated into the perspectives of entire groups of people, manifested as the hierarchical obligations of tribute. The desire to maintain homeostatic equilibrium is satisfied when the tribute obligations do not overwhelm the abilities of subject individuals to provide for themselves. Symbolic traditionalism regarding iterative tribute obligations is adopted by both rulers and their subjects, serving to develop and maintain equilibrium across the affected population.

Throughout history the social contract has not served to reduce violence between people, but instead to regiment and normalize violence in such a way as to maintain the system of traditional obligations between them. The argument that hyper-violent anarchy is the only alternative to the “restrained” violence of the state is invalidated by actual examination of human social behaviors, both historic and contemporary. Much of the philosophical framework for current liberal political systems has been built on inaccurate assumptions and faulty conclusions. There are certainly some valuable ideas that have come from political philosophy through the ages, but they need to be examined in the light of new knowledge and understanding.