Deep History of America’s Deep State

Exclusive: The idea of an elitist Deep State – erasing a “mistake” by the people – pervades current efforts to remove buffoonish President Trump, but the concept has deep historical roots dating from the Founding, writes Jada Thacker.

By Jada Thacker

Everybody seems to be talking about the Deep State these days. Although the term appears to have entered the lexicon in the late 1990s, for years it referred only to shady foreign governments, certainly not to our own “indispensable nation.”

Does the sudden presence of an American Deep State – loosely defined as an unelected elite that manipulates the elected government to serve its own interests – pose a novel, even existential, threat to democracy?

Not exactly. The threat seems real enough, but it’s nothing new. Consider these facts: 230 years ago, an unelected group of elite Americans held a secretive meeting with an undisclosed agenda. Their purpose was not merely to manipulate lawful government in their own interests, but to abolish it altogether. In its place, they would install a radically undemocratic government – a “more perfect” government, they said – better suited to their investment portfolios.

History does not identify these conspirators as the Deep State. It calls them the Founders. The Founders did not consider themselves conspirators, but “republicans” – not in reference to any political party, but rather to their economic station in society. But their devotion to “republicanism” was transparently self-serving. A current college text, The American Journey: A History of the United States, explains, though does not explicate, “republican ideology”:

“Their main bulwark against tyranny was civil liberty, or maintaining the right of the people to participate in government. The people who did so, however, had to demonstrate virtue. To eighteenth century republicans, virtuous citizens were those who were focused not on their private interests but rather on what was good for the public as a whole.

They were necessarily property holders, since only those individuals could exercise an independence of judgment impossible for those dependent upon employers, landlords, masters, or (in the case of women and children) husbands and fathers.” [Emphasis supplied]

Republicanism was a handy idea if you happened to be a master or a landlord, who were the only persons this ideology considered “virtuous” enough to vote or hold political office. Thus, “republicanism” – virtually indistinguishable from today’s “neoliberalism” – created the original Deep State in the image of the economic system it was designed to perpetuate.

How this was accomplished is not a comforting tale. But it cannot be related or understood without an appreciation of the historical context in which it occurred.

Masters and Servants

Post-colonial America was predominantly agrarian, and about 90 percent of the population were farmers. (The largest city in 1790 was New York, with a whopping population of 33,000 residents.) There was a small middle class of artisans, shopkeepers, and even a handful of industrial workers, but the politically and economically powerful people were the relatively few big-time merchants and landowners – who also fulfilled the function of bankers.

America was not quite a feudal society, but it resembled one. Commoners did not call at the front doors of the rich, but were received around back. Most states had official religions, some with compulsory church attendance backed by fines. Commodity-barter was the currency of the day for the vast majority. Debtors were imprisoned. Parents sold their children into bondage. It wasn’t what most people think of when they hear “Yankee Doodle Dandy.”

All states restricted voting to men who owned a requisite amount of property, while the majority – women (unless widowed), servants, and tenants – owned no property at all. Moreover, most states had property requirements for eligibility to elective office, some reserving the higher offices for those with the most property. Such restrictions had discriminated against the urban underclass and farmers since the beginning of American colonization.

Nobody at the time characterized this land of masters and servants as a “democracy.” Indeed, the master class considered “democracy” synonymous with “mob rule.” But not everybody was happy with “republican virtue” in post-war America, least of all the slaves of the “virtuous.”

The Revolutionary War had stirred passions among the servant class for social and economic liberty, but when the war ended nothing much had changed. In fact, the war proved not to have been a revolution at all, but only a change from British overlords to American overlords. Edmund Morgan, widely considered the dean of colonial American history, characterized the “non-Revolutionary War” this way:

“The fact the lower ranks were involved in the contest should not obscure the fact that the contest itself was generally a struggle for office and power between members of an upper class: the new against the established.”

About 1 percent of the American population had died in a war fought, they had been told, for “liberty.” (Compare: if the U.S. lost the same proportion of its population in a war today, the result would be over three million dead Americans.) Yet after the war, economic liberty was nowhere in sight.

Moreover, the very concept of “liberty” meant one thing to a farmer and quite another to his rich landlord or merchant. Liberty for a common farmer – typically a subsistence farmer, who farmed not to make money but to provide the necessities of life for his family – meant staying out of debt. Liberty for merchants and property owners – whose business it was to make monetary profits – meant retaining the ability to lend or rent to others, along with access to the power of government to enforce monetary repayment from debtors and tenants.

Much like the American Indians who had first communally owned the property now occupied by American subsistence farmers, agrarian debtors faced the unthinkable prospect of losing their ability to provide for their families (and their vote) if their land were confiscated for overdue taxes or debt. [See Consortiumnews.com’s “How Debt Conquered America.”]

Loss of their land would doom a freeholder to a life of tenancy. And the servitude of tenants and slaves differed mainly as a function of iron and paper: slaves were shackled by iron, tenants were shackled by debt contracts. But iron and paper were both backed by law.

By the end of the Revolutionary War, as few as a third of American farmers owned their own land. When the urban elites began to foreclose on the debts and raise the taxes of subsistence farmers – many of whom had fought a long and excruciating war to secure their “liberty” – it amounted to a direct assault on the last bastion of Americans’ economic independence.

The Original Great Recession

After the war, British merchants and banks no longer extended credit to Americans. Moreover, Britain refused to allow Americans to trade with its West Indies possessions. And, to make matters worse, the British Navy no longer protected American ships from North African pirates, effectively closing off Mediterranean commerce. Meanwhile, the American navy could not protect American shipping, in the Mediterranean or elsewhere, because America did not happen to possess a navy.

In the past, American merchants had obtained trade goods from British suppliers by “putting it on a tab” and paying for the goods later, after they had been sold. Too many Americans had reneged on those tabs after the Revolution, and the British now demanded “cash on the barrelhead” in the form of gold and silver coin before they would ship their goods to America.

As always, Americans had limited coin with which to make purchases. As the credit crunch cascaded downwards, wholesalers demanded cash payment from retailers, retailers demanded cash from customers. Merchants “called in” loans they had made to farmers, payable in coin. Farmers without coin were forced to sell off their hard-earned possessions, livestock, or land to raise the money, or risk court-enforced debt collection, which included not only the seizure and sale of their property but also imprisonment for debt.

The most prominent result of Americans’ war for “liberty” turned out to be a full-blown economic recession that lasted a decade. Even so, the recession would not have posed a life-threatening problem for land-owning subsistence farmers, who lived in materially self-sufficient, rural, communal societies. But when state governments began to raise taxes on farmers, payable only in unavailable gold and silver coin, even “self-sufficient” farmers found themselves at risk of losing their ability to feed their families.

Debt, Speculation, and the Deep State

The Continental Congress had attempted to pay for its war with Britain by printing paper money. The British undermined these so-called “Continental” dollars, not only by enticing American merchants with gold and silver, but by counterfeiting untold millions of Continental dollars and spending them into circulation. The aggregate result was the catastrophic devaluation of the Continental dollar, which by war’s end was worthless.

In the meantime, both Congress and state governments had borrowed to pay for “liberty.” By war’s end, war debt stood at $73 million, $60 million of which was owed to domestic creditors. It was a staggering sum of money. In his now studiously ignored masterpiece, An Economic Interpretation of the Constitution of the United States, historian Charles A. Beard showed that domestically-held war debt was equivalent to 10 percent of the value of all the surveyed land holdings (including houses) in the entire United States at the time.

The war debt carried interest, of course – which is a problem with debt if you owe it, but is a feature of debt if it is owed to you. Not only was “freedom not free” – it came with dividends attached for Deep State investors. This should sound at least vaguely familiar today.

As Continental paper money lost its value, Congress and state governments continued to pay for “liberty” with coin borrowed at interest. When that ran short, government paid only with promises to pay at a later date – merely pieces of paper that promised to pay coin (or land) at some indeterminate time after the war was won.

This was how the government supplied the troops (whenever it managed to do so) and also how it paid them. In actual practice, however, Congress often did not pay the troops anything, not even with paper promises, offering only verbal promises to pay them at the end of the war.

But war is never a money-making enterprise for government, and when it ended, the government was as broke as ever. So, it wrote its verbal promises on pieces of paper, and handed them to its discharged troops with a hearty Good Luck with That! Even so, Congress paid the soldiers in bonds worth only a fraction of the pay owed for the time most had served, promising (again!) to pay the balance later – which it never did.

Thousands of steadfast, longsuffering troops were abandoned this way. Most had not been paid any money in years (if ever), and many were hundreds of miles from their homes – ill, injured, and starving – as they had been for months and years. Others literally were dressed only in rags or pieces of rags. Some carried paper promises of money; some carried paper promises of geographically distant land – none of which would be available until years in the future, if at all.

Seven-year Revolutionary War veteran Joseph Plumb Martin described his plight in a bitter memoir entitled A Narrative of Some of the Adventures, Dangers and Sufferings of a Revolutionary Soldier: “We were absolutely, literally starved. I do solemnly declare that I did not put a single morsel of victuals in my mouth for four days and as many nights, except a little black birch bark which I gnawed off a stick of wood, if that can be called victuals. I saw several of the men roast their old shoes and eat them….

“When the country had drained the last drop of service it could screw out of the poor soldiers, they were turned adrift like old worn-out horses, and nothing said about land to pasture them on.”

Was this liberty? To impoverished veterans, “liberty” looked bleak, indeed. To speculators in government bonds, liberty looked like a golden opportunity, quite literally so.

Vultures possessed of coin swooped in and bought a dollar’s worth of government promises for a dime, and sometimes for just a nickel. Speculators wheedled promises not only from desperate veterans (many of whom sold their promises merely to obtain food and clothes on their long trudge home), but from a host of people whose goods or services had been paid with IOUs.

Optimistic speculators cadged bonds from pessimistic speculators. The more desperate people became during the recession, the more cheaply they sold their promises to those who were not.

Speculators expected their investments, even those made with now-worthless paper money, to be paid in gold or silver coin. What’s more, “insiders” expected all those various government promises would eventually be converted – quietly, if possible – into interest-bearing bonds backed by a single, powerful taxing authority. All the Deep State needed now was a national government to secure the investment scheme. A man named Daniel Shays unwittingly helped to fulfill that need.

Rebellion and Backlash

Thomas Jefferson penned the famous sentence: “The tree of liberty must be refreshed from time to time with the blood of patriots and tyrants.” He was not referring to heroic American Patriots charging up Bunker Hill against British bayonets. He was referring instead to American farmers – many of whom had been the starving soldiers in a war for forsaken liberty – taking their lives into their hands to oppose the tax policies of the government of Massachusetts in 1787. The principal leader of this revolt was farmer and war veteran Daniel Shays.

In a sense, the most interesting thing about Shays’s Rebellion is that it was not a unique event.

The first notable example of agrarian revolt had been Bacon’s Rebellion in 1676 Virginia, when frontier farmers marched on the rich plantation owners of Jamestown, burned it to the ground, published their democratic “Declaration of the People,” and threatened to hang every elite “tyrant” on their list – which included some of the forefathers of America’s patriot Founders.

Historian Gary Nash reminds us that Bacon’s Rebellion had echoes across early American history: “Outbreaks of disorder punctuated the last quarter of the 17th century, toppling established governments in Massachusetts, New York, Maryland, Virginia, and North Carolina.” Jimmy Carter, in The Hornet’s Nest, the only novel ever published by an American president, tells a similar story of the agony of dispossessed farmers in Georgia a century later.

Other farmers had rebelled in New Jersey in the 1740s; in the New York Hudson Valley rent wars in the 1750s and 1760s and concurrently in Vermont by Ethan Allen’s Green Mountain Boys; for a decade in North Carolina in the 1760s, where vigilantes called Regulators battled the government of the urban elite; and in Virginia in the 1770s. Likewise, American cities had been scenes of labor unrest, riots, and strikes for a century. American class rebellion, apparently unbeknownst to most history teachers in America, was closer to the rule than the exception.

Victory in the war against England only intensified the conflict between those who considered “liberty” a necessary condition for living without debt and those who considered “liberty” their class privilege to grow rich from the debts others owed them. Howard Zinn, in his A People’s History of the United States, describes the economic realities of Eighteenth Century America:

“The colonies, it seems, were societies of contending classes – a fact obscured by the emphasis, in traditional histories, on the external struggle against England, the unity of colonists in the Revolution. The country therefore was not ‘born free’ but born slave and free, servant and master, tenant and landlord, poor and rich.”

Although Shays’s Rebellion was not unique, it was a huge event, coming at a time when the rich were owed a great deal of money by impoverished governments. Pressured by rich bondholders and speculators, the government of Massachusetts duly raised taxes on farmers. To make matters far worse, the taxes were to be paid only in gold or silver – which was completely out of the question for most western farmers, who had no way to obtain coined money.

When the farmers complained, their complaints were ignored. When farmers petitioned the government to issue paper money and accept it as payment of debts and taxes, the government refused their petitions. When the farmers pleaded for the passage of “legal tender laws” that would allow them to settle their debts or taxes with their labor, they were rebuffed.

But when farmers could not pay what they did not have, the Massachusetts courts ordered their land seized and auctioned. At last, the farmers understood the practical effect, if not the specific intent, of the tax: confiscation of their property and its transfer to the rich, to whom the government owed its interest-bearing debt. Government had become an armed collection agency.

To the utter dismay of the erstwhile proudly tax-rebellious Patriots, the farmers too rebelled. Shaysites forcibly shut down the tax courts that were condemning them to servitude. The rich responded by loaning the destitute government more money (at interest!) to pay a militia force to oppose Shays’s rebels.

At this point, tax rebels abandoned reform for radical revolution and – in a resounding echo of Nathaniel Bacon’s century-old Declaration of the People – pledged to march on Boston and burn it to the ground. This was no Tea Party vandalism, stage-managed by well-to-do Bostonians like Samuel Adams. It was a full-blown, grassroots agrarian revolution a century in the making.

The urban bond-holding merchant-class in Boston and elsewhere panicked. And none panicked more than bond speculators, who intimately understood the rebels threatened their “virtuous” republican “liberty” to extract profit from others. Historian Woody Holton exposes the astonishing callousness of one of America’s major bond speculators in his nationally acclaimed Unruly Americans and the Origin of the Constitution:

“As a bondholder, Abigail Adams would benefit immensely if her fellow Massachusetts citizens [paid the tax] levied by the legislature in March 1786, but she also saw compliance as a sacred duty. If Massachusetts taxpayers were ‘harder-prest by publick burdens than formerly,’ she wrote, ‘they should consider it as the price of their freedom’.”

Future First Lady Abigail Adams was not alone in thinking freedom came with dividends payable to her account. Historian David Szatmary reminds us in his Shays’ Rebellion: The Making of an Agrarian Insurrection that the former Patriot leadership, especially those in the merchant class, were among the first to advocate violence against democratic rebellion.

Said a published opinion piece at the time: “When we had other rulers, committees and conventions of the people were lawful – they were then necessary; but since I myself became a ruler, they cease to be lawful – the people have no right to examine my conduct.”

Showboat Patriot and bond speculator Samuel Adams – former mastermind of the Boston Tea Party and erstwhile propagandist against unfair British taxes (as well as cousin to Abigail’s husband John Adams) – sponsored a Massachusetts law that allowed sheriffs to kill tax protesters outright.

Another rich bondholder and speculator, ex-Revolutionary War General Henry Knox (the fitting namesake of Fort Knox, the famous repository of gold bullion) wrote an alarming letter to his former commander George Washington, accusing the Shays’s rebels of being “levelers” (which was the closest term to “communists” then in existence). He informed Washington that the country needed a much stronger government (and military) to prevent any riffraff challenge to the elite. His message was not wasted on General Washington, America’s richest slave owner.

In the end, the Congress, under the Articles of Confederation, could raise no money from the states to provide an army, but the privately-financed, for-profit Massachusetts militia successfully defeated Shays’s rebels. Still, the nearly hysterical fear of democratic economic revolution had been planted in the minds of the masters. Shays’s Rebellion proved to be the last straw for bond speculators whose profits were jeopardized by democracy.

Worse yet, the governments of many other states were beginning to cave under intense democratic pressure from rebellious debtors. Some states were entertaining laws that prevented the seizure of property for debt; others were creating paper money in order to break the gold and silver monopoly. Rhode Island not only voted in a paper money system, but threatened to socialize all commercial business enterprises in the state.

In response to the threat of populism, the “virtuous” elite reacted decisively – not to remedy the plight of debtors, of course – but to secure their own profits from them. Accordingly, in 1786, five states sent delegates to meet at Annapolis, Maryland, just as Shays’s Rebellion veered into revolution. This unelected minority called for Congress to authorize a convention to be held in Philadelphia the next year “for the sole and express purpose of revising the Articles of Confederation.” The Articles were never to be “revised.” They were to be scrapped altogether by the Deep State.

The Deep State Conspires

Thanks to Charles A. Beard’s An Economic Interpretation of the Constitution of the United States, we know quite a lot about the status of the 55 men who conspired to draft the Constitution. But the very first thing we need to know is that they were not authorized by “We the People,” for the simple reason that nobody had voted for them; all were political appointees.

Nor were they even a representative sample of the people. Not a single person in the Convention hall “worked for a living,” nor was female, nor was a person of color. Only one claimed to be a “farmer,” the current occupation of about 90 percent of the population. Most were lawyers. Go figure.

If the delegates represented anybody at all, it was the economic elite: 80 percent were bondholders; 44 percent were money-lenders; 27 percent were slave owners; and 25 percent were real estate speculators. Demographically, the 39 delegates who signed the final draft of the Constitution constituted 0.001 percent of the American population reported in the 1790 census. George Washington, who presided, was arguably the wealthiest man in the country. Deep State gamblers all.

And the stakes were high. Recall that the face value of outstanding domestic government bonds in 1787 was $60 million, equivalent to 10 percent of the total improved land value of the country. But these bonds, for the most part, had been obtained by speculators at a fraction of face value. Beard very conservatively estimated the profit of speculators – if the bond were redeemed at face value – would have been some $40 million. Expressed as the same proportion of total improved land value at the time of the Founding, the expected profit from government bonds held then would equal at least $3 trillion today. Tax free.

We still do not know everything that transpired at the convention. No one was assigned to keep a record of what was discussed. Reportedly, even the windows to the meeting hall were nailed shut to prevent eavesdropping – though there would be “leaks.” Because of its secrecy and its unauthorized nature, some historians have called the convention “the second American Revolution.” But revolutions are public, hugely participatory events. This was a coup d’état behind locked doors.

Most delegates presumably understood their undisclosed purpose was to dump the whole system of confederated government (which had cost 25,000 American lives to secure) into a dustbin. They evidently did not intend to obey their instructions “solely to revise” the Articles because a number of them showed up at the convention with drafts for a new constitution in hand.

The conspirators’ ultimate goal was to replace the Confederation with what they later euphemized as “a more perfect Union” – designed from the outset to protect their class interests and to ensure the new government possessed all the power necessary to perpetuate the existing oligarchy.

At the Convention, Alexander Hamilton captured the prevailing sentiment: “All communities divide themselves into the few and the many. The first are the rich and well-born; the other the mass of the people … turbulent and changing, they seldom judge or determine right. Give therefore to the first class a distinct, permanent share in the Government. … Nothing but a permanent body can check the imprudence of democracy.”

Hamilton further proposed that both the President and the Senate be appointed (not elected) for life. His vision was but half a step removed from monarchy. Though not a Convention delegate, John Jay, Hamilton’s political ally, slaveowner, and the first Chief Justice of the Supreme Court, stated the purpose of “republicanism” with brutal brevity: “The people who own the country ought to govern it.”

The Founders never once envisioned any such thing as “limited government” – unless perhaps in the sense that the power of government was to be limited to their own economic class. [See Consortiumnews.com’s “The Right’s Made-up Constitution.”]

In Towards an American Revolution: Exposing the Constitution & Other Illusions, historian Jerry Fresia sums the Founders’ views succinctly: “The vision of the Framers, even for Franklin and Jefferson who were less fearful of the politics of the common people than most, was that of a strong centralized state, a nation whose commerce and trade stretched around the world. In a word, the vision was one of empire where property owners would govern themselves.” [Emphasis supplied]

Self-government by the people was to remain permanently out of the question. The Deep State was to govern itself. “We the People,” a phrase hypocritically coined by the ultra-aristocrat Gouverneur Morris, would stand forever after as an Orwellian hoax.

The tricky task of the hand-picked delegates was to hammer out a radical new system of government that would superficially resemble a democratic republic, but function as an oligarchy.

William Hogeland’s excellent Founding Finance, recounts the anti-democratic vehemence expressed at the Convention: “On the first day of the meeting that would become known as the United States Constitutional Convention, Edmund Randolph of Virginia kicked off the proceedings […] ‘Our chief danger,’ Randolph announced, ‘arises from the democratic parts of our constitutions. … None of the constitutions’ – he meant those of the states’ governments – ‘have provided sufficient checks against the democracy.’”

No wonder they nailed the windows shut. It should be no surprise that the word “democracy” does not appear once in the entire U.S. Constitution, or any of its Amendments, including the Bill of Rights. Accordingly, the Constitution does not once refer to the popular vote, and it did not guarantee a single person or group suffrage until the adoption of the 15th Amendment in 1870, over 80 years after ratification. The Preamble aside, the Founders used the phrase “the People” only a single time (Art. I, Sec. 2).

It has been suggested the word “democracy” had a different meaning then than it has now. It did not. “Democracy” to the Convention delegates meant the same thing as it does today: “rule by the people.” That’s why they detested it. The delegates considered themselves the patriarchs of “republicanism,” the ideology that rejected participation in government by people like their wives, servants, tenants, slaves, and other non-propertied inferiors. No doubt, the delegates passionately disagreed on many things, but the “fear and loathing” of democracy was not one of them. Then or now.

The Deep State’s Specific Goals

Embedded within the Founders’ broadly anti-democratic agenda were four specific goals. These were not a list of items jotted down in advance, but were derived by group consensus as the minimum requirements necessary to achieve the Deep State’s ultimate agenda.

To camouflage the stark oligarchic nationalism the measures intended, the Founders disingenuously styled themselves “Federalists.” But nothing about these measures concerned a “federation” of sovereign states; taken together, they were intended to demolish the existing “perpetual” confederation, not to re-create it more effectively.

National government with limited citizen participation. Of all the measures required to achieve a national oligarchy, this was the most daunting. It was achieved by a wide array of provisions.

The Electoral College. The President and Vice President are not elected by popular vote, but by electors – then and now. For example, when George Washington was first elected President, the American population was 3.9 million. How many of those folks voted for George? Exactly 69 persons – which was the total number of electors voting at the time. (Art. II, Sec. 1)

Bi-Cameral Congress. Congress is bi-cameral, composed of two “houses” – the House of Representatives and the Senate. Under the original Constitution, the House members represented the people who vote for them, while the Senate represented states, not persons, and was therefore not a democratic body, at all. It was generally expected that the Senate would “check” the democratic House. Indeed, this was the entire purpose of bi-cameralism wherever it has existed. (Art. I, Secs. 1 and 2)

State Appointment of Senators. Senators were originally appointed by state legislatures (until the 17th Amendment in 1913). It was expected that the Senate would function in Congress as the House of Lords functioned in Parliament: the voice of the aristocracy. Even though Senators are now popularly elected, it is far more difficult to challenge an incumbent because of the prohibitive expense of running a state-wide campaign. (Art. I, Sec. 3)

Appointment of the Judiciary. All federal judges are appointed for life terms by the President and confirmed by the (originally undemocratic) Senate. (Art. III, Sec. 1)

Paucity of Representation. Most undemocratic of all was the extreme paucity of the total number of House members. The House originally was composed of only 65 members, or one member per 60,000 persons. Today, there are 435 members, each representing about 700,000 persons. Thus, current House representation of the public is 12 times less democratic than when the Constitution was written – and it was poor (at best) then.

Compare: The day before the Constitution was ratified, the people of the 13 United States were represented by about 2,000 democratically elected representatives in their various state legislatures (a ratio of 1:1,950); the day after ratification, the same number of people were to be represented by only 65 representatives in the national government (1:60,000). In quantitative terms, this represents more than a 3,000 percent reduction of democratic representation for the American people. (Art. I, Sec. 2)

Absence of Congressional Districts. Although House members now run for election in equal-populated districts, the districts were created by Congress, not the Constitution. Until the 1960s, some House members were elected at-large (like Senators), which prevented all but the richest and best-known candidates from winning. (Not referenced in Constitution)

Absence of Recall, Initiative and Referendum. The Constitution does not allow the people to vote to recall (un-elect) a Congress member, demand a Congressional vote on any issue (propose an initiative) or vote directly in a referendum on any issue (direct democracy). (Not referenced in Constitution)

Absence of Independent Amendment Process. One of the reasons Americans now have professional politicians is that the Constitution does not provide a way for “the people” to amend it without the required cooperation of a sitting Congress. At the Constitutional convention, Edmund Randolph of Virginia (surprisingly) proposed that the people be afforded a way to amend the Constitution without the participation of Congress. This excellent idea, however, was not adopted. (Art. V)

National authority to tax citizens directly. (Art. I, Sec. 8; 16th Amendment)

National monopolization of military power. (Art. I, Sec. 8, clauses 12, 13, 14, 15, 16)

Denial of states’ power to issue paper money or provide debtor relief. (Art. I, Sec. 10; Art. I, Sec. 8, clause 4)

All of these provisions were completely new in the American experience. For 150 years or more, citizen participation in government, independent militias, and the issuance of paper money had been the prerogative of the several, independent colonies/states – while direct external taxation had been universally and strenuously resisted. When the British Crown had threatened to curtail colonial prerogatives, the very men who now conspired for national power had risen in armed rebellion. The hypocrisy was stunning. And people took note.

Consent of the Minority

One of the note-takers was Robert Yates, a New York delegate to the Convention, who had walked out in protest. Not long afterwards, Yates (who owned no government bonds) stated his objection to the new Constitution: “This government is to possess absolute and uncontrollable power, legislative, executive and judicial, with respect to every object to which it extends. …

“The government then, so far as it extends, is a complete one. … It has the authority to make laws which will affect the lives, the liberty, and the property of every man in the United States; nor can the constitution or the laws of any state, in any way prevent or impede the full and complete execution of every power given.”

At least half of the American population (collectively called “Anti-federalists”) thought the Constitution was a terrible idea. To be sure, well-to-do Anti-federalists like Yates were not overtaxed farmers, and their objections were often based upon the defense of states’ rights, not people’s economic rights. Most Anti-federalists, however, seemed alarmed that the Constitution contained no guarantee of the basic political rights they had enjoyed under the British Empire, such as freedom of speech or trial by jury.

The debate between supporters and critics of the Constitution raged for a year, while partisan newspapers published articles both pro and con. A collection of 85 “pro” articles, written by Alexander Hamilton, James Madison and John Jay, is now known as The Federalist Papers. Although these articles have been studied almost as religious relics by historians, they do not tell us “what the Constitution really means.”

The Constitution means what it says. The Federalist Papers are sales brochures, written by lawyers trying to get others to “buy” the Constitution. The same can be said about a similar collection of “Anti-federalist Papers,” from which Yates’s quote above was taken. In any event, it is up to the courts to interpret the Constitution, not lawyers with vested interests.

In due course, the Anti-federalists put their collective foot down. There would be no hope of ratification without amendments guaranteeing fundamental political – but not economic – rights. Although Hamilton argued a guarantee of rights would be “dangerous,” James Madison convinced the Federalists that agreeing to guarantee a future Bill of Rights would be much safer than meddling with the text of the current document, which might entail unraveling its core nationalist, anti-democratic agenda. And so, a deal was struck.

Even so, the battle over the ratification of the Constitution was not ultimately decided by the people of the nation. Although the people of the several states had not voted to authorize the Convention, or the document it had produced, the Founders had been incredibly arrogant, not to mention sly. Not only had they presented the unauthorized document to the states as a take-it-or-leave-it proposition (no changes allowed), but the document itself demanded that only special state “conventions” could ratify it – not the majority popular vote of the people.

Specifying ratification by conventions meant the people would be voting for convention delegates, who would in turn vote for ratification. This was tantamount to turning ratification into a popularity contest between convention delegates, rather than a direct democratic vote on the document itself. Moreover, ratification by convention raised the possibility that a minority of the people in a state (those in favor of the Constitution) might “pack” a convention with delegates, who would then approve a document establishing a government for all.

Electoral shenanigans were not just hypothetical possibilities. In Philadelphia, for example, a mob kidnapped elected legislators who were boycotting a convention vote, physically dragged them into the state house, and tied them to their chairs in order to force a convention vote. Other, more subtle methods of manipulation occurred elsewhere, notably the disenfranchisement of voters through property qualifications.

Over a hundred years ago, Charles A. Beard completed his exhaustive study of the Constitution and confirmed that it most likely was ratified by a majority – of a minority of the people.

Among Beard’s final conclusions were these: “The Constitution was ratified by a vote of probably not more than one-sixth of the adult males….The leaders who supported the Constitution in the ratifying conventions represented the same economic groups as the members of the Philadelphia Convention….The Constitution was not created by ‘the whole people’ as the jurists [judges] have said; neither was it created by ‘the states’ as Southern nullifiers long contended; but it was the work of a consolidated group whose interests knew no state boundaries and were truly national in their scope.”

The Deep State, in other words. It was darkly appropriate that a document whose primary purpose was to defeat democratic rule was, itself, brought into force without a majoritarian vote.

In 1788, nine of the 13 states’ conventions ratified the Constitution (as specified in the Constitution’s own Article VII) and the document became the supreme law of the land for those nine states. By 1790, even the democratic holdout Rhode Island had followed suit. And America’s schoolchildren have been led to believe ever since that the Constitution is a sacred document, inspired and ordained by the public-spirited benevolence of Founding Fathers.

But this had been predicted. It had seemed painfully obvious to Eighteenth Century Genevan political philosopher Jean-Jacques Rousseau that constitutional government was the invention of the Deep State, its designated beneficiary.

Dripping with sarcasm, his virtuoso Discourse on Inequality explained the process: “[T]he rich man … at last conceived the deepest project that ever entered the human mind: this was to employ in his favour the very forces that attacked him, to make allies of his enemies…

“In a word, instead of turning our forces against ourselves, let us collect them into a sovereign power, which may govern us by wise laws, may protect and defend all the members of the association, repel common enemies, and maintain a perpetual concord and harmony among us.”

Rousseau penned these words in 1754, 33 years before Gouverneur Morris oversaw the drafting of the identical sales pitch that constitutes the Preamble to the United States Constitution: “We the People of the United States, in Order to form a more perfect Union, establish Justice, insure domestic Tranquility, provide for the common defence, promote the general Welfare, and secure the Blessings of Liberty to ourselves and our Posterity, do ordain and establish this Constitution for the United States of America.”

Rousseau concludes: “All offered their necks to the yoke in hopes of securing their liberty; for though they had sense enough to perceive the advantages of a political constitution, they had not experience enough to see beforehand the dangers of it; those among them, who were best qualified to foresee abuses, were precisely those who expected to benefit by them….” [Emphasis added]

Does the Deep State pose an existential threat to American democracy today? Move along, folks – nothing new to see here.

Jada Thacker, Ed. D, is the author of Dissecting American History: A Theme-Based Narrative. He teaches History and Government at a college in Texas. Contact: jadathacker@sbcglobal.net




NATO as an ‘Entangling Alliance’

There are many ugly aspects of Donald Trump’s candidacy, but Trump raises a legitimate question about the value of NATO, which represents the epitome of the “entangling alliances” that the Founders warned against, notes Ivan Eland.

By Ivan Eland

With populism running wild in Europe and in the United States — the Brexit and American presidential candidate Donald Trump questioning U.S. alliances being just two obvious examples — suddenly people are asking the big questions about the future of Western institutions that should have been asked after the Cold War ended.

Both the Brexit and Donald Trump seem to be driven by a nativist element, but that doesn’t diminish the value of the implicit questions that they are posing. Americans should listen to Donald Trump, while examining the Brexit, and ask themselves if the United States shouldn’t withdraw from NATO and other military alliances.

Of course, such a U.S. withdrawal would be much more consequential for NATO and other U.S. alliances than is the Brexit for the European Union. Britain is not even the largest economy in the E.U. The United States accounts for three-quarters of the defense spending of NATO countries, and it is very unlikely that those allies — all much closer to zones of conflict than is the United States — will be defending the superpower rather than vice versa.

Since World War II, the United States has provided security, formally or informally, for an ever-widening number of ever more prosperous nations in Europe and East Asia, but has gotten few commercial or other considerations in return. Many of these nations or blocs have never fully opened their markets to U.S. trade, finance, and investment.

Such one-sided alliances were justified by American elites and the foreign beneficiaries of such security welfare as being in the American interest too. Really?

George Washington, who preferred neutrality as a foreign policy, warned against the United States forming “permanent alliances,” and Thomas Jefferson cautioned against getting bogged down in “entangling alliances.” In fact, Jefferson wrote in 1799, “I am for free commerce with all nations, political connection with none, and little or no diplomatic establishment. And I am not for linking ourselves by new treaties with the quarrels of Europe.”

Outdated Warning?

But times have changed, right? Rapid advances in communication and transportation have led to a more interdependent world, which compels the United States, as an exceptional nation in world history, to monitor disturbances in faraway and even insignificant places, so that they don’t snowball into larger threats — for example, the rise of another Adolf Hitler to threaten Europe. Thus, shouldn’t the views of America’s founders on foreign policy go the way of the powdered wig?

No, the basic geography of the United States hasn’t changed from the time of the nation’s founders; they perceptively realized that the United States might just have the most favorable geography of any great power in world history. The United States has two large ocean moats and is far away from the zones of conflict in the world.

Today, the country actually might be even more secure than at the founding, because it no longer has foreign great powers prowling around its borders, but instead has weak and friendly neighbors, and now has the most capable nuclear arsenal on the planet — which should deter attacks, nuclear or conventional, from any nation with a home address vulnerable to cataclysmic retaliation.

As for interdependence, in the security realm, the advent of the nuclear age may have actually made the world less so; cross-border aggression — conflicts that have a greater potential to adversely affect U.S. security than do foreign internal civil wars — has dropped significantly in the post-World War II era.

Alliances are not ends in themselves; they are used by countries to increase their security by banding together against foreign threats. Yet, after World War II, the United States began to acquire the first permanent alliances in its history just when it began not to need them — it had just developed nuclear weapons and ever since has been the leader in such technology.

But what about guarding against what a future Adolf Hitler or Joseph Stalin could do in Europe? Ever since World War II, America’s overly interventionist foreign policy has been based on avoiding another Munich 1938 disaster, when British Prime Minister Neville Chamberlain appeased Hitler, instead of confronting him, thus emboldening an attempted German takeover of Europe.

However, such a limited reading of history self-servingly absolves the United States (and Britain and France) from having created the Hitlerian monster in the first place. The United States entered World War I, tipped the balance to British and French allies that simply wanted to greedily expand their empires, declared that the Germans were guilty of starting the war, imposed harsh financial reparations on Germany that helped cause the bad economic conditions that brought Hitler to power, and demanded the abdication of the German Kaiser, thus clearing the way for Hitler’s rise and World War II.

One other important lesson from World War I is that alliances — even informal ones, such as the one Britain had with France and the biased U.S. “neutrality” of arms sales and financing credits sent to Britain but not Germany — can impede flexibility and drag countries into wars they don’t want. No one country desired World War I, but such webs made it spread and engulf the entire continent and beyond. And World War II was just World War I, Part II.

So with the Brexit and the Trump candidacy leading to an examination of the big questions, maybe the United States should ask whether its expensive alliances are really needed for security or are just to maintain an entangling and costly world empire based on vanity. Perhaps an Amerexit from them is in order.

Ivan Eland is senior fellow and director of the Center on Peace & Liberty at the Independent Institute, Oakland, CA, and the author of Recarving Rushmore: Ranking the Presidents on Peace, Prosperity, and Liberty.




Dissing George Washington for Reagan

Exclusive: Was Ronald Reagan a greater American leader than George Washington? That is the impression one gets when historic “Washington National Airport” is redubbed “Reagan National.” Are Americans really that anti-historical to have forgotten Washington’s significance, asks Robert Parry on the first President’s 284th birthday.

By Robert Parry

Arguably, George Washington was the one indispensable American. He was commander-in-chief during the American Revolution, holding the embattled Continental Army together sometimes by sheer force of will; at another key turning point, he presided at the Constitutional Convention, giving the nation its governing framework; he then served as the first President, placing his personal stamp of legitimacy on the fragile, young Republic.

While other Founders played important historical roles – John Adams organizing the Revolution, James Madison devising the Constitution, Alexander Hamilton giving substance to the new federal government, etc. – it was Washington whose temperament and stature made the entire experiment work.

Later, other American leaders stepped forward to guide the nation through grave crises, such as Abraham Lincoln in the Civil War and Franklin Roosevelt during the Great Depression and World War II, but Washington was truly the Father of the Country, giving the nation life on battlefields up and down the length of the Thirteen Colonies, inside the contentious Constitutional Convention in Philadelphia, and in the establishment of a truly unified nation by serving two terms as the first President.

Surely, Washington was not a person without flaws and contradictions, but without him it is hard to imagine what would have happened to the American colonies in the late 1770s – or, assuming that independence was won, to the squabbling states under the ineffectual Articles of Confederation in the 1780s.

At every key turning point in those early years, Washington was there, sacrificing for the new nation. He suffered with his troops at Valley Forge; he collaborated with Madison and Hamilton in overcoming the national disunion that followed military victory; he agreed to leave his beloved Mount Vernon to serve as the first President of the United States – but then retired after two terms, showing that no one person was bigger than “We the People” enshrined in the Constitution’s Preamble.

So, it is fitting that Americans honor this great early leader of the American Republic. But what is odd – and to me troubling – is the ahistorical attitude that essentially expunges Washington’s name from what had been “Washington National Airport” to rename it, in effect, “Reagan National” or simply “Reagan.”

Whatever one thinks of Ronald Reagan – and I rate him one of America’s worst presidents for his profligate fiscal policies, his excessive militarism, his atrocious actions on human rights and his contempt for the Constitution as demonstrated by the Iran-Contra scandal – it is hard to believe that even dyed-in-the-wool Republicans and conservatives would rate Reagan as a greater president than George Washington.

Yet, since Reagan’s name was shoehorned into the airport’s title after Republicans seized control of Congress in 1995 – a change signed into law by Democratic President Bill Clinton – many U.S. airlines have dropped Washington’s name altogether when referring to what had long been “Washington National Airport.” It’s now referred to commonly as “Reagan National” or “Reagan.”

Bizarre and Confusing

Beyond the bizarre suggestion that Ronald Reagan was a more important historical figure than George Washington, there is the practical concern that many people visiting Washington D.C. find it confusing that its airport, which was once named after Washington (who incidentally lived close by at Mount Vernon), is now identified as “Reagan,” who spent most of his adult life in California and only lived in Washington during his time in the White House (and then only when he wasn’t vacationing back in California).

So, doesn’t it make more sense – both historically and practically – to again refer to National Airport as “Washington National”? And, even if it is currently politically impossible to restore the traditional name – given how the reversion would infuriate many Republicans – can’t we, as a flying public, demand that the airlines go back to combining Washington and National, rather than demeaning America’s Founding Father by dropping him in favor of Ronald Reagan?

I know my suggestion may be deemed petty by some and quixotic by others, but there is meaning in historical symbolism. That is, after all, why Republicans insisted on elbowing Washington aside in the first place and elevating their recent hero Reagan to such an august position.

But are Americans so historically ignorant that we actually believe that Ronald Reagan was a more important figure in our national existence than George Washington? Do Americans really not know or appreciate how the Republic was created? Are the heroic sacrifices of Washington and his Continental Army so forgotten and disrespected that an actor-turned-politician is given top billing? Do conservatives who call themselves “constitutionalists” have so little regard for Washington and others who crafted the actual Constitution that they relegate them to a subordinate position? Have Republicans forgotten how the Republic got started and who started it?

Really! Regardless of our political persuasions, don’t we care enough about America’s Founders and America’s Founding to tell the commercial airline companies that it’s not “Reagan National,” it’s “Washington National”!

Investigative reporter Robert Parry broke many of the Iran-Contra stories for The Associated Press and Newsweek in the 1980s. You can buy his latest book, America’s Stolen Narrative, either in print here or as an e-book (from Amazon and barnesandnoble.com).




How Scalia Distorts the Framers

From the Archive: The late Supreme Court Justice Scalia put his right-wing ideology above any respect for the Constitution’s Framers, even resorting to a made-up view attributed to Alexander Hamilton in Scalia’s dissent to the landmark upholding of the Affordable Care Act, wrote Robert Parry in 2012.

By Robert Parry (Originally published on July 4, 2012)

Antonin Scalia and the three other right-wing justices who sought to strike down health-care reform cited no less an authority on the Constitution than one of its key Framers, Alexander Hamilton, as supporting their concern about the overreach of Congress in regulating commerce.

In their angry dissent on June 28, 2012, the four wrote: “If Congress can reach out and command even those furthest removed from an interstate market to participate in the market, then the Commerce Clause becomes a font of unlimited power, or in Hamilton’s words, ‘the hideous monster whose devouring jaws . . . spare neither sex nor age, nor high nor low, nor sacred nor pro­fane.’” They footnoted Hamilton’s Federalist Paper No. 33.

That sounds pretty authoritative, doesn’t it? Here’s Hamilton, one of the strongest advocates for the Constitution, offering a prescient warning about “Obamacare” from the distant past of 1788. Except that Scalia and his cohorts are misleading you. In effect, they turned Hamilton’s observation inside out.

In Federalist Paper No. 33, Hamilton was not writing about the Commerce Clause. He was referring to clauses in the Constitution that grant Congress the power to make laws that are “necessary and proper” for executing its powers and that establish federal law as “the supreme law of the land.”

Hamilton also wasn’t condemning those powers, as Scalia and his friends would have you believe. Hamilton was defending the two clauses by poking fun at the Anti-Federalist alarmists who had stirred up opposition to the Constitution with warnings about how it would trample America’s liberties. In the cited section of No. 33, Hamilton is saying the two clauses had been unfairly targeted by “virulent invective and petulant declamation.”

It is in that context that Hamilton complains that the two clauses “have been held up to the people in all the exaggerated colors of misrepresentation as the pernicious engines by which their local governments were to be destroyed and their liberties exterminated; as the hideous monster whose devouring jaws would spare neither sex nor age, nor high nor low, nor sacred nor profane.”

In other words, Scalia and the three other right-wingers not only applied Hamilton’s comments to the wrong section of the Constitution but also reversed their meaning. Hamilton was mocking those who claimed that these clauses would be “the hideous monster.”

Twisting the Framers

It is ironic indeed that Hamilton’s words, countering alarmist warnings from his era’s conservatives, would be distorted by this era’s conservatives to spread new alarms about the powers of the Constitution.

Scalia’s distortion also underscores a larger tendency on the Right to fabricate a false founding narrative that transforms key advocates for a strong central government – the likes of Alexander Hamilton and James Madison – into their opposites, all the better to fit the Tea Party’s fictional storyline.

Of course, Scalia’s deception would be an easy sell to typical Tea Party advocates, whose certainty about their made-up history would be reinforced as they pretend to stand with the Framers, complete with tri-corner hats from costume shops and bright-yellow “Don’t Tread on Me” flags.

Indeed, the Scalia-authored dissent reads more like a Tea Party manifesto than a carefully reasoned legal argument. The dissent sees the Affordable Care Act, which seeks to impose some rationality on America’s chaotic health-insurance system, as a step toward a despotic scheme that would “make mere breathing in and out the basis for federal prescription and to extend federal power to virtually all human activity.”

Some Supreme Court watchers even suspect that it may have been Scalia’s intemperate tone that pushed Chief Justice John Roberts from a position of initially rejecting the Affordable Care Act outright as an unconstitutional use of the Commerce Clause to supporting its constitutionality under congressional taxing powers.

The four more liberal justices endorsed the law’s constitutionality under the Commerce Clause but also joined with Roberts on his tax conclusion, thus upholding the law and sending Scalia and his three right-wing cohorts – Anthony Kennedy, Clarence Thomas and Samuel Alito – into a further paroxysm of rage.

What becomes clear in reading the dissent is that not only do the right-wing justices misrepresent the views of the Framers regarding the Commerce Clause, but they also misunderstand a central reality of why the Framers wrote the Constitution in 1787. The Framers junked the states-rights-oriented Articles of Confederation in favor of the Constitution because they wanted to solve the nation’s problems.

Founding Pragmatists

Led by James Madison and George Washington, the drafters of the Constitution crafted a profoundly pragmatic document, filled not only with political compromises to pull together the 13 squabbling states but also with practical solutions to the challenges of a new, sprawling and disparate nation.

The Commerce Clause, which grants Congress the power to regulate interstate commerce, was not some afterthought but rather one of Madison’s most cherished ideas, as Justice Ruth Bader Ginsburg noted in her opinion on behalf of the Court’s four more liberal members.

Citing a 1983 ruling entitled EEOC v. Wyoming, Ginsburg noted that “the Commerce Clause, it is widely acknowledged, ‘was the Framers’ response to the central problem that gave rise to the Constitution itself.’”

That problem was a lack of national coordination on economic strategy, which hindered the country’s development and made the nation more vulnerable to commercial exploitation by European powers, which looked to divide and weaken the newly independent United States.

Ginsburg wrote: “Under the Articles of Confederation, the Constitution’s precursor, the regulation of commerce was left to the States. This scheme proved unworkable, because the individual States, understandably focused on their own economic interests, often failed to take actions critical to the success of the Nation as a whole.”

The Articles of Confederation, which governed the country from their ratification in 1781 until 1789, had explicitly asserted the “independence” and “sovereignty” of the 13 individual states, making the central government essentially a supplicant to the states for necessary financial support.

After watching the Continental Army suffer when the states reneged on promised funds, General Washington felt a visceral contempt for the concept of sovereign and independent states. He became a strong supporter of Madison’s idea of a stronger central government, including one with the power to regulate commerce.

In 1785, Madison proposed a Commerce Clause as an amendment to the Articles, with Washington’s strong support. “We are either a united people, or we are not,” Washington wrote. “If the former, let us, in all matters of a general concern, act as a nation which have national objects to promote, and a national character to support. If we are not, let us no longer act a farce by pretending it to be.”

Alexander Hamilton, who had served as Washington’s chief of staff in the Continental Army, explained the commerce problem this way: “[Often] it would be beneficial to all the states to encourage, or suppress, a particular branch of trade, while it would be detrimental . . . to attempt it without the concurrence of the rest.”

Madison himself wrote, regarding the failings of the Articles, that as a result of the “want of concert in matters where common interest requires it,” the “national dignity, interest, and revenue [have] suffered.”

However, Madison’s commerce amendment failed in the Virginia legislature. That led him to seek an even more radical solution: scrapping the Articles altogether and replacing them with a new structure with a powerful central government whose laws would be supreme and whose powers would extend to coordinating a strategy of national commerce.

Building the Framework

As Madison explained to fellow Virginian Edmund Randolph in a letter of April 8, 1787, as members of the Constitutional Convention were gathering in Philadelphia, what was needed was a “national Government . . . armed with a positive & compleat authority in all cases where uniform measures are necessary.”

On May 29, 1787, the first day of substantive debate at the Constitutional Convention, it fell to Randolph to present Madison’s framework. The Commerce Clause was there from the start.

Madison’s convention notes on Randolph’s presentation recount him saying that “there were many advantages, which the U. S. might acquire, which were not attainable under the confederation – such as a productive impost [or tax] – counteraction of the commercial regulations of other nations – pushing of commerce ad libitum – &c &c.”

In other words, the Founders at their most “originalist” moment understood the value of the federal government taking action to negate the commercial advantages of other countries and to take steps for “pushing of [American] commerce.” The “ad libitum &c &c” notation suggests that Randolph provided other examples off the top of his head.

Historian Bill Chapman has summarized Randolph’s point as saying “we needed a government that could co-ordinate commerce in order to compete effectively with other nations.”

So, from the very start of the debate on a new Constitution, Madison and other key Framers recognized that a legitimate role of the U.S. Congress was to ensure that the nation could match up against other countries economically and could address problems impeding the nation’s economic strength and welfare.

This pragmatism imbued Madison’s overall structure even as he included intricate checks and balances to prevent any one branch of government from growing too dominant. The final product also reflected compromises between the large and small states over representation and between Northern and Southern states over slavery, but Madison’s Commerce Clause survived as one of the Constitution’s most important features.

However, the Constitution’s dramatic transfer of power from the states to the central government provoked a furious reaction from supporters of states’ rights. The Articles’ phrasing about state “sovereignty” and “independence” had been removed entirely, replaced with language making federal law supreme.

The Anti-Federalists recognized what had happened. As dissidents from the Pennsylvania delegation wrote: “We dissent because the powers vested in Congress by this constitution, must necessarily annihilate and absorb the legislative, executive, and judicial powers of the several states, and produce from their ruins one consolidated government.”

Winning Ratification

As resistance to Madison’s federal power-grab spread, and as states elected delegates to ratifying conventions, Madison feared that his constitutional masterwork would go down to defeat or be subjected to a second convention that might remove important federal powers like the Commerce Clause.

So, Madison along with Alexander Hamilton and John Jay began a series of essays, called the Federalist Papers, designed to counter the fierce attacks by the Anti-Federalists against the broad assertion of federal power in the Constitution.

Madison’s strategy was essentially to insist that the drastic changes contained in the Constitution were not all that drastic, an approach he took both as a delegate to the Virginia ratifying convention and in the Federalist Papers. But Madison also touted the advantages of the Constitution and especially the Commerce Clause.

For instance, in Federalist Paper No. 14, Madison envisioned major construction projects under the powers granted by the Commerce Clause. “[T]he union will be daily facilitated by new improvements,” Madison wrote. “Roads will everywhere be shortened, and kept in better order; accommodations for travelers will be multiplied and meliorated; an interior navigation on our eastern side will be opened throughout, or nearly throughout the whole extent of the Thirteen States.

“The communication between the western and Atlantic districts, and between different parts of each, will be rendered more and more easy by those numerous canals with which the beneficence of nature has intersected our country, and which art finds it so little difficult to connect and complete.”

While ignoring Federalist Paper No. 14, today’s right-wingers are fond of noting Madison’s Federalist Paper No. 45, in which he tries to play down how radical a transformation from state to federal power he had engineered in the Constitution.

Rather than view this essay in context, as Madison finessing the opposition, the modern Right seizes on Madison’s rhetorical efforts to deflect the Anti-Federalist attacks by claiming that some of the Constitution’s federal powers were contained in the Articles of Confederation, albeit in far weaker form.

In Federalist Paper No. 45, entitled “The Alleged Danger From the Powers of the Union to the State Governments Considered,” Madison wrote: “If the new Constitution be examined with accuracy, it will be found that the change which it proposes consists much less in the addition of NEW POWERS to the Union, than in the invigoration of its ORIGINAL POWERS.”

Today’s Right also trumpets Madison’s summation, that “the powers delegated by the proposed Constitution to the federal government are few and defined. Those which are to remain in the State governments are numerous and indefinite.”

But the Right generally ignores another part of No. 45, in which Madison writes: “The regulation of commerce, it is true, is a new power; but that seems to be an addition which few oppose, and from which no apprehensions are entertained.”

In his ruling, joining with his fellow right-wing justices in rejecting the application of the Commerce Clause to the Affordable Care Act, Chief Justice Roberts does mention that line from Federalist Paper No. 45. However, he spins Madison’s meaning into a suggestion that the Commerce Clause should never contribute to any controversy.

Looking to the Future

However, what Madison’s comments about the Commerce Clause actually demonstrated was a core reality about the Framers: that, by and large, they were practical men seeking to build a strong and unified nation. They also viewed the Constitution as a flexible document designed to meet America’s ever-changing needs, not simply the challenges of the late Eighteenth Century.

As Hamilton wrote in Federalist Paper No. 34, “we must bear in mind that we are not to confine our view to the present period, but to look forward to remote futurity. Constitutions of civil government are not to be framed upon a calculation of existing exigencies, but upon a combination of these with the probable exigencies of ages, according to the natural and tried course of human affairs.

“Nothing, therefore, can be more fallacious than to infer the extent of any power, proper to be lodged in the national government, from an estimate of its immediate necessities. There ought to be a CAPACITY to provide for future contingencies as they may happen; and as these are illimitable in their nature, it is impossible safely to limit that capacity.”

Indeed, the Commerce Clause was a principal power that Madison crafted to deal with commercial challenges both current to his time and future ones that could not be anticipated by his contemporaries. There also was a reason why the Framers made the power to regulate interstate commerce unlimited. They wanted to invest in the elected representatives of the United States the ability to solve future problems.

In Madison’s day, the nation’s challenges included the need for canals and roads that would move goods to market and enable settlers to travel westward into lands that European powers also coveted. Always a principal concern was how European competition could undermine the hard-won independence of the nation.

Though the Framers could not have envisioned the commercial challenges of the modern world, American businesses remain under intense foreign competition today, in part, because of an inefficient health-care system that imposes on U.S. businesses the cost of health insurance that drives up the price of American goods.

Under the current system, not only do many American businesses pay for their employees’ health care (while most other developed nations pay medical bills through general taxation), but U.S. companies also indirectly pick up the cost of the uninsured who get emergency care and don’t pay.

So, a law that makes American businesses more competitive by addressing this “free-rider” problem and by assuring a healthier work force would seem to be right down the middle of the Framers’ intent in drafting the Commerce Clause.

No Practicality

In contrasting Justice Ginsburg’s opinion on the Affordable Care Act with Scalia’s dissent, one of the most striking differences is how the Framers are understood: Ginsburg sees them as pragmatic problem-solvers, while Scalia envisions them as rigid ideologues placing individual freedom above practical goals.

The core of the Scalia-written dissent is that the Constitution is NOT about solving problems, but rather following the most crimped interpretation of the words. Indeed, he ridicules Ginsburg for viewing the founding document as implicitly intended to give the elected branches of government the flexibility to address national challenges.

Yet, there was little question from either side that virtually every American participates in the commerce of health care from birth to death and that the health-insurance mandate in the Affordable Care Act was intended by Congress to regulate what is clearly a national market.

In the dissent, the four right-wing justices acknowledged that “Congress has set out to remedy the problem that the best health care is beyond the reach of many Americans who cannot afford it. It can assuredly do that, by exercising the powers accorded to it under the Constitution. The question in this case, however, is whether the complex structures and provisions of the Affordable Care Act go beyond those powers. We conclude that they do.”

Scalia noted that Ginsburg “treats the Constitution as though it is an enumeration of those problems that the Federal Government can address, among which, it finds, is ‘the Nation’s course in the economic and social welfare realm,’ and more specifically ‘the problem of the uninsured.’

“The Constitution is not that. It enumerates not federally soluble problems, but federally available powers. The Federal Government can address whatever problems it wants but can bring to their solution only those powers that the Constitution confers, among which is the power to regulate commerce. None of our cases say anything else. Article I contains no whatever-it-takes-to-solve-a-national-problem power.”

The right-wing justices insisted that the power to “regulate” commerce couldn’t possibly cover something like a mandate to buy health insurance.

Chief Justice Roberts, in his own opinion (which rejected use of the Commerce Clause but then justified the Affordable Care Act under the Constitution’s taxing powers), decided that some of the definitions of the word “regulate” couldn’t be applied because they were not the first definitions in the dictionaries of the late Eighteenth Century.

However, in an earlier opinion upholding the Affordable Care Act, conservative U.S. Appeals Court Judge Laurence Silberman noted that “At the time the Constitution was fashioned, to ‘regulate’ meant, as it does now, ‘[t]o adjust by rule or method,’ as well as ‘[t]o direct.’ To ‘direct,’ in turn, included ‘[t]o prescribe certain measure[s]; to mark out a certain course,’ and ‘[t]o order; to command.’

“In other words, to ‘regulate’ can mean to require action, and nothing in the definition appears to limit that power only to those already active in relation to an interstate market. Nor was the term ‘commerce’ limited to only existing commerce. There is therefore no textual support for appellants’ argument” that mandating the purchase of health insurance is unconstitutional.

However, in Roberts’s ruling, the Chief Justice threw out certain definitions for “regulate,” such as “[t]o order; to command,” saying they were not among the top definitions in the dictionaries of the time. Roberts wrote, “It is unlikely that the Framers had such an obscure meaning in mind when they used the word ‘regulate.’”

Needing Health Care

Scalia and Roberts also adopted a very narrow concept of participation in the health-care industry. Though it’s undeniable that virtually all Americans from birth to death receive medical care of various types and at different times, the Court’s five right-wing justices treated the gaps between those events as meaning people are no longer in the health market.

Roberts wrote: “An individual who bought a car two years ago and may buy another in the future is not ‘active in the car market’ in any pertinent sense. The phrase ‘active in the market’ cannot obscure the fact that most of those regulated by the individual mandate are not currently engaged in any commercial activity involving health care, and that fact is fatal to the Government’s effort to ‘regulate the uninsured as a class.’”

But, as Ginsburg noted in her opinion, this comparison is off-point, because a person can plan for the purchase of a car but often is thrust into the medical industry by an accident or an unexpected illness.

Over and over again, the five right-wing justices behaved as if they started out with a determination to reject a constitutional justification under the Commerce Clause and then dreamt up legal wording to surround their preconceived conclusion. In doing so, they treated the Constitution as some finicky legal document rather than what the Framers had intended, a vibrant structure for solving national problems.

And, as for the Framers’ views regarding mandating American citizens to buy a private product, one can get a good idea of their attitude by examining the actions of the Second Congress in passing the Militia Acts, which mandated that every white male of military age buy a musket and related supplies. That Congress included actual Founders, such as James Madison. The law was signed by George Washington, another Founder. [See Consortiumnews.com’s “The Founders’ Musket Mandate.”]

So, despite what today’s Right wants you to believe, the Framers were not hostile to a strong central government; they were not big advocates of states’ rights; they were not impractical ideologues contemplating their navels or insisting on some hair-splitting interpretation of their constitutional phrasing.

Rather, they were pragmatic individuals trying to build a nation. They wrote the Constitution specifically so the country could address its pressing problems and match up competitively with America’s foreign rivals. Since Justices Scalia, Kennedy, Thomas and Alito don’t have this real history on their side, they apparently saw little option but to make up their own.

Investigative reporter Robert Parry broke many of the Iran-Contra stories for The Associated Press and Newsweek in the 1980s. You can buy his latest book, America’s Stolen Narrative, either in print here or as an e-book (from Amazon and barnesandnoble.com).




Toward a Rational US Strategy (Part 2)

Special Report: The ultimate madness of today’s U.S. foreign policy is Official Washington’s eager embrace of a new Cold War against Russia with the potential for nuclear annihilation. A rational strategy would seek alternatives to this return to big-power confrontation, writes ex-U.S. diplomat William R. Polk.

By William R. Polk

In Part One, I dealt at length with America’s relationship with “Lesser” or “Third World” powers because that is where we have been most active since the Second World War. I now turn to America’s postwar rivalry with the other “Great” power, the Soviet Union, and offer some thoughts on our growing relationship with China.

For more than half a century, we and the Soviet Union were locked in the Cold War. During that time we were often on the brink of Hot War. We organized ourselves to fight it if necessary but we also created political alliances, economies and politico-military structures with the announced aim of avoiding war.

Thus we built such organizations as NATO, CENTO and SEATO, stationed much of our army abroad and manned thousands of bases around the world. We also recast much of our economy into the “military-industrial complex” to supply our overseas ventures.

Inevitably our efforts in foreign affairs upset traditional balances within our society. It is beyond my purpose here to describe the growth of “the National Security State” since the 1947 acts that established the governmental organs and profoundly altered universities, businesses and civic groups.  Here I focus on the strategy that grew out of the Cold War and which is now returning to dominate our thought and action on China and shaping our action on the emerging alliance of China and Russia.

With shows of military force adjacent to major Russian bases, we have returned to the confrontation that marked the most dangerous Cold War episodes. [See The New York Times, Eric Schmitt & Steven Myers, “U.S. Is Poised to Put Heavy Weaponry in Eastern Europe,” and The Guardian, Ewen MacAskill, “Nato shows its teeth to Russia with elaborate Baltic training exercise.”].

The Cold War divided as much of the world as either the U.S. or USSR could control into what Nineteenth Century statesmen called “spheres of influence.” Both great powers used their military, financial, commercial, diplomatic and ideological power to dominate their “blocs.” Since neither side could establish precise and stable frontiers, each power built real or notional “walls” around its sphere, each probed into the sphere of the other and both competed for the favor of the uncommitted.

Spheres of influence, as earlier statesmen had discovered, require careful maintenance, are unstable and do not preclude hostilities. They are not a substitute for peace or security, but sometimes they have seemed to statesmen the most advantageous ways to manage foreign relations. It was the attempt to make the Soviet-American “frontier” more stable and lessen the chance of war that was the contribution of the preëminent American strategist, George Kennan.

Hedgehog vs. Fox

George Kennan personified the hedgehog in an ancient Greek poem on the difference between the wise hedgehog and the cunning fox. Like the hedgehog, Kennan had one big idea, “containment,” the strategy of the Cold War, while all around him the “foxes” were chasing and arguing over tactics.

Kennan’s idea was that the Soviet drive for aggrandizement could be contained long enough that the state could evolve. Most of the foxes thought that the USSR should be “rolled back” and devised military means to do it. Some of them were prepared to go to nuclear war to accomplish that objective.

These were obviously major differences, but what is less obvious is that both Kennan and his critics thought of what they were doing as war: Kennan wanted it to be “colder” than the foxes, but he was prepared to engage in (and indeed personally designed and helped to implement) a variety of espionage “dirty tricks” that pushed relations between the U.S. and USSR close to “hot” war. Both he and the foxes aimed at American dominance.

When Kennan elaborated his ideas on containment rather than military conflict first in his 1946 Secret “Long Telegram” from Moscow and then anonymously in “The Sources of Soviet Conduct” in the July 1947 issue of Foreign Affairs, they were considered heresy.  The then “dean” of Washington columnists, Walter Lippmann, wrote a series of articles attacking them. [Originally in New York Herald Tribune, his articles then appeared in book form as The Cold War: A Study in U.S. Foreign Policy (1947).]

Lippmann and the growing number of “big bomb” enthusiasts in government-funded “think tanks” thought Kennan failed to understand the fundamental evil of the Soviet system and so was gambling with American security. The only answer, they felt, was military superiority.

Military superiority was the central idea in what became a long series of U.S. national policy statements. (The latest being the February 2015 “National Security Strategy” of President Obama.) The first, and most influential, statement of it was “NSC 68” which  was written by Kennan’s successor as director of the Policy Planning Staff (as it was then known), Paul Nitze, and adopted by President Harry Truman as official policy. It called for a massive build-up of both conventional and nuclear arms.

Nitze castigated Kennan, writing, “Without superior aggregate military strength, in being and readily mobilizable, a policy of ‘containment’ which is in effect a policy of calculated and gradual coercion is no more than a policy of bluff.”

McGeorge Bundy later commented in Danger and Survival, “NSC 68 took the gloomiest possible view of the prospect of any agreed and verifiable bilateral limitation” on weapons. It also “explicitly considered and rejected the proposal that George Kennan had put forward for a policy [of] no first use of nuclear weapons.” [On Kennan’s and Nitze’s complex relationship reminiscent of that of Thomas Jefferson and Alexander Hamilton — see Nicholas Thompson’s The Hawk and the Dove (2009).]

NSC 68 provoked a massive Soviet nuclear weapons development. It also set off a limited (but then muted) debate within the American government. Willard Thorp, a noted government economist who had helped draft the Marshall Plan, pointed out that as measured by such criteria as the production of steel the total strength of the U.S. was about four times that of the USSR and that the current “gap is widening in our favor.” In effect, he was saying the Cold War was mostly hype. [Willard Thorp. Memorandum to the Secretary of State: “Draft Report to the President,” April 5, 1950].

Threatening War

More wide-ranging was the critique of William Schaub, a senior official in the Bureau of the Budget. In a memorandum to the NSC, dated May 8, 1950, he pointed out that the almost exclusive military emphasis of NSC 68 would “be tantamount to notifying Russia that we intended to press war in the near future.”

Moreover, he wrote, the policy “vastly underplays the role of economic and social change as a factor in ‘the underlying conflict.’” And, as a result of our focus on the Soviet threat, “We are being increasingly forced into associations [with Third World regimes] which are exceedingly strange for a people of our heritage and ideals.”

So it was that Kennan, Lippmann, Nitze, Thorp and Schaub opened the door on the issue that would engage policymakers for the next half century.  And dozens of would-be strategists rushed to enter.

But, before NSC 68 could be seriously discussed, on June 25, 1950, North Korean military forces crossed the 38th parallel and invaded South Korea. As Secretary of State Dean Acheson later remarked, Korea preëmpted discussion on American strategy. The argument over containment and superiority never ceased.

Discussion on American strategy, actually, had already been preëmpted. America had the bomb and most of the “Wise Men” (a term coined by McGeorge Bundy for the Cold War foreign policy “Establishment”) in the upper reaches of government thought that threat of its use was the bedrock of American security because, as the American army faded away in 1945, it was evident that the Russians had overwhelming power in conventional forces. In military terms, the Cold War was already staked out.

The Cold War created a “need” for intelligence. From 1946, the U.S. Air Force was monitoring the borders of the USSR and its satellites. At first the Joint Chiefs of Staff opposed mounting probes, and the Soviet Union protested them. A compromise was reached with an implicit U.S.-USSR “gentleman’s agreement” that restricted flights to no closer than 40 miles from borders.

Then in 1949 the Soviet Union exploded its first nuclear device, and in November 1950 Chinese forces entered Korea. On Dec. 16, 1950, President Truman declared a state of National Emergency. Suddenly, gathering intelligence on Soviet capabilities, particularly on the presumed ability of the Soviet air force to attack the United States across Alaska, became urgent.

Truman immediately approved aerial penetrations of Siberia. The US had just acquired a new relatively fast, high-flying bomber, the B-47, that could be modified for the task. That was the first step in a lengthy game in which both Russian and American fighter planes intercepted, followed, photographed but usually did not attempt to shoot down each other’s reconnaissance aircraft.

Usually, but not always. The first armed clash came, apparently, in 1949. In the following 11 years, a dozen or more U.S. aircraft were shot down or crashed in or near the USSR. Neither side publicly admitted these losses. Keen on “deniability,” and so to avoid serious conflict, President Eisenhower asked the British to perform the mission.

But finally, the CIA ordered a new aircraft, the Lockheed jet-powered glider, the U-2, and had it flown by CIA pilots. It was the CIA contract pilot Gary Powers who flew the U-2 that was brought down over the USSR on May 1, 1960.

It was because of the U-2 and related communications intelligence that the United States developed its close relationships with Turkey and Pakistan.  The relationship with Pakistan set the conditions for American aid and incidentally determined the relationship with India.  Without Congressional authorization, the CIA had entered into a deal with the government of Pakistan to create a base for the U-2 to overfly the USSR. [The National Security Archive, August 15, 2013, Jeffrey T. Richelson (ed.), “The Secret History of the U-2 and Area 51.”]

Each Side’s Fears  

At the time, Cold War strategy came into focus at the junction of Russian mass and American technology. Each side feared what the other side had and sought to counter it:  the Russians pushed their powerful land forces up to the line in Europe while the Americans built sophisticated weapons like the ICBM and multiple warheads.

Few then believed that a balance could be reached short of the capacity to obliterate the world. All eyes were on military issues. And, at least on the American side, the aim was to achieve security by military superiority. That was the strategic advice of such “cold warriors” as Thomas Schelling, Henry Kissinger, Albert Wohlstetter and Herman Kahn. [For their writings at the center of the Cold War period, see Thomas C. Schelling, The Strategy of Conflict (1960), Herman Kahn, On Thermonuclear War (1960), Henry Kissinger, Nuclear Weapons and Foreign Policy (1969), Albert Wohlstetter, “The Delicate Balance of Terror,” Foreign Affairs 37, January 1959].

It took the Cuban Missile Crisis and the analyses of it that followed within the U.S. government to challenge the strategy of the Cold War. The crisis made clear that the quest for military superiority had reached a dead end. Pressing ahead with actions to overawe the Soviet Union was likely to destroy the entire world.

I have spelled out elsewhere the consequences of conflict, but since this is so important in any attempt to understand a conceivable American strategy and is, I fear, receding in memory, I will just mention here the key points:

Even the great advocate of thermonuclear weapons, Edward Teller, admitted that their use would “endanger the survival of man[kind].” The Russian nuclear scientist and Nobel Peace Prize laureate, Andrei Sakharov, laid out a view of the consequences in the Summer 1983 issue of Foreign Affairs as “a calamity of indescribable proportions.”

More detail was assembled by a scientific study group convened by Carl Sagan and reviewed by 100 scientists.  A graphic summary of their findings was published in the Winter 1983 issue of Foreign Affairs. Sagan pointed out that since both major nuclear powers had targeted cities, casualties could reasonably be estimated at between “several hundred million to 1.1 billion people” with an additional 1.1 billion people seriously injured.

Those figures related to the 1980s. Today, the cities have grown so the numbers would be far larger. Massive fires set off by the bombs would carry soot into the atmosphere, causing temperatures to fall to a level that would freeze ground to a depth of about 3 feet. Planting crops would be impossible and such food as was stored would probably be contaminated so the few survivors would starve.

The hundreds of millions of bodies of the dead could not be buried and would spread contagion. As the soot settled and the sun again became visible, the destruction of the ozone layer would remove the protection from ultraviolet rays and so promote the mutation of pyrotoxins.

Diseases against which there were no immunities would spread. These would overwhelm not only the human survivors but, in the opinion of the expert panel of 40 distinguished biologists, would cause “species extinction” among both plants and animals. Indeed, there was a distinct possibility that “there might be no human survivors in the Northern Hemisphere … and the possibility of the extinction of Homo sapiens…”

The Missile Crisis solidified my disagreements on strategy with both Kennan and Nitze. From my participation in the crisis as one of the three members of the Crisis Management Committee, I became convinced that the “option” of military confrontation in the age of nuclear weapons and ICBMs was not realistic. Armed confrontation was suicide. And, the “strategy of conflict,” as laid out by Schelling, Kissinger, Wohlstetter and Kahn, was likely to cause it. That was the first conclusion.

My second conclusion was that both the “hedgehog” and the “foxes,” that is, both Kennan and the military-oriented strategists led by Nitze, had misunderstood what caused war to actually break out. Because this may be absolutely crucial to avoiding stumbling into war, let me explain.

Basic to the American Cold War strategy was the belief that, regardless of the intelligence, politics or desire of whatever government it then had, in armed conflict America would be forced to fire its nuclear weapons because it did not have conventional forces adequate to stop an invading Russian army.

Knowing this, sensible Soviet leaders would “back off” from determined American challenges because they would realize that, as Schelling put it, “the option of nonfulfillment no longer exists.” Moreover, Schelling and the Cold Warriors believed that because the Russians knew that even a limited retaliation would lead to their destruction, America could engage in “limited” nuclear strikes. In the war game Schelling designed, this was the assumption.

All-Out Nuclear War 

In Schelling’s war game (to test what he had written in The Strategy of Conflict on limited war and reprisal) that was played out with access to all information the U.S. government had and involved only senior American officers, I was the political member of “Red Team.” The game was played in the Pentagon and was classified Top Secret. It was taken very seriously, as it should have been, by our senior officials.

In Schelling’s scenario, in a hypothetical crisis (following a coup in Iran), “Blue Team” obliterated Baku, killing about 200,000 people. How would Red Team respond? The chairman of our team, the then Chief of Naval Operations Admiral Anderson, playing Chairman Khrushchev, asked me to recommend our response.

I replied that I saw three options: first, play tit-for-tat, destroying, say, Dallas. Limited nuclear war enthusiasts would presumably then expect the American president to go on television and say, “Fellow Americans, I am sorry to have to report to you that if you had relatives in Dallas … they are gone. The Russians retaliated because we incinerated one of their cities. So now we’re even. Now we’ll just go back to the normal Cold War.”

The team agreed that this was ridiculous. America would “re-retaliate”; the USSR would re-re-retaliate also and war would quickly become general. There was no stopping in a “limited war.”

The second option was to do nothing. Was this feasible? We agreed that it would certainly have led to a military coup d’état in which the Soviet leadership would have been shot as traitors. Knowing this, they would have been unlikely to adopt that move. Even if they did, and were overthrown, that would not stop retaliation: the coup leaders would strike back.

So there remained only one option:  general war. And only one feasible move: striking first with everything we had in the hope that we could disable our opponent. We signaled that we “fired” as many of Red Team’s notional 27,000 nuclear weapons as we could deliver.

Schelling was shocked. He stopped the game and scheduled a post-mortem to discuss how we had “misplayed.” The issue was serious, he said: if we were correct, he would have to give up the theory of deterrence, the very bedrock of the strategy of the Cold War. Why had we made such a foolish move?

In our meeting, I repeated our team’s analysis: I emphasized that the fault in his (and America’s) limited war strategy was that it failed to differentiate “interest of state” from “interest of government.” Schelling and American military planners assumed that they were the same. They were not.

It was obviously better for the Soviet Union not to engage in a nuclear exchange, but to appear to knuckle under to an American threat would be suicide for the leaders. Nikita Khrushchev’s backing down in the Missile Crisis was a rare and nearly fatal act of statesmanship. He could afford it for two key reasons: first, no missiles or other air strikes happened so that no Russians had to be avenged and, second, the Soviet civilian and military leaders all agreed (as they later confirmed to me when I lectured at the Institute of World Economy and International Affairs of the Soviet Academy) that they accepted the geostrategic reality: Cuba was in the American “zone.” They had gone too far.

Still, they did not forgive him. Khrushchev’s body was not buried in the Kremlin Wall as was done for other leaders. The reverse would also be true for our leaders.

My conclusion was that the idea of limited nuclear war was a recipe for general war; that the quest for supremacy was likely to lead to war; and, therefore, that the policy underlying the Cold War was unrealistic.

Obviously, those in a position to make the decisions did not agree. While limited and sporadic moves were made to ameliorate the U.S.-USSR relationship, particularly in the area of nuclear weapons, we continued to seek weapons superiority and political dominance.

Reagan’s Escalation

President Ronald Reagan escalated American weapons production with the aim of bankrupting the Soviet Union. Initially, the policy seemed to work. When the Soviet Union “imploded,” Reagan was given the credit. His policy seemed to vindicate the hard-line policy proposed 40 years earlier by Paul Nitze in NSC 68.

We now know that the Soviet collapse was caused mainly by its “Vietnam,” its disastrous nine-year war in Afghanistan that coincided with the Reagan administration. [This was the conclusion of British Ambassador to Russia Sir Rodric Braithwaite in Afgantsy: The Russians in Afghanistan 1979-1989 (2010).] That cause was largely overlooked.

So the wrong lesson was taken into the administration of Reagan’s successor, President George H.W. Bush. His advisers concluded that since the quest for military superiority worked, an even greater emphasis on it could be expected to work even better.

That assumption led to a far more radical approach to American foreign policy than had ever been contemplated. It was the program set out under the auspices of Under Secretary of Defense Paul Wolfowitz. (While it became known as the “Wolfowitz Doctrine,” the “Defense Planning Guidance of 1992” was written by Wolfowitz’s fellow neoconservative, the Afghan-American Zalmay Khalilzad, with the help of neoconservatives Lewis “Scooter” Libby, Richard Perle and Albert Wohlstetter.)

The “Wolfowitz Doctrine,” slightly toned down by Secretary of Defense Dick Cheney and Chairman of the Joint Chiefs of Staff General Colin Powell, set the tone for American policy for the next 20 years.

Taking advantage of Soviet weakness, the Wolfowitz Doctrine sought “to prevent the re-emergence of a new rival” and “to preclude any hostile power from dominating a region critical to our interests” and to “discourage them [our European allies] from challenging our leadership.”

If any of these challenges arose, the United States would preëmpt the challenge. It would intervene whenever and wherever it thought necessary. It particularly threatened the Russian government if it attempted to reintegrate such newly independent republics as Ukraine.

The Wolfowitz Doctrine, repackaged as the “National Security Strategy of the United States,” was published on Sept. 20, 2002. It justified President George W. Bush’s invasions of Afghanistan (for harboring Osama bin Laden) and Iraq (for allegedly building nuclear weapons). And, although it was not, of course, cited by the Obama administration, it laid the foundation for its policy toward Russia in Ukraine and explains some of the emerging policy of the American government toward China.

The attempt to use China against Russia, Secretary of State Henry Kissinger’s ploy, seemed to work, for a while, but has faded because both Russia and China realized that their immediate challenge came not from one another but from America.

Despite accommodations (as in Hong Kong), China is determined to realize at sea (in the southwest Pacific) and in international finance (with its establishment of a rival to the America-dominated World Bank, the Asian Infrastructure Investment Bank), its historic self-image as a major or even the central (Mandarin: Zhongguo) world power.

The Chinese policy confronts America with two choices: recognize and gradually accommodate the Chinese thrust into what it regards as its sphere of influence or try to thwart it. Early moves suggest that America will try, even militarily, to continue its established policy of blocking Chinese outward moves.

In short, it seems that we are at the beginning of a replay of the Soviet-American Cold War. But since history never exactly repeats, I will briefly consider the changes that are taking us into this new world.

The Arena of World Affairs

The modern and future arena of international affairs is the whole world, so the template of international affairs is and will be composed of an interplay of geography, climate, resources, technology and population. Changes in each are unprecedented. Today, we are at the onset of a new revolution, one that is already creating a new world in which older concepts of strategy are becoming irrelevant.

While we are still powered by coal and oil, we are in a race to make the transition to wind and solar power before we do irreparable damage to the planet. Lester R. Brown et al. point out in The Great Transition (2015) that solar and wind power costs are falling rapidly, so that they are becoming competitive with coal, and that, among other costs of fossil fuels, the rise of sea levels already has dramatic effects on agriculture in Asia. Many scientists believe we may be too late and that we will suffer catastrophic changes in our climate.

The need to avoid that fate has not yet led to effective international cooperation, but as rising seas and a deteriorating climate become increasingly severe and prevent us from producing food as readily and economically, states will be forced to cooperate. Population is also being altered in size and in kind.

People today are more politicized than ever before but are also more susceptible to manipulation by increasingly controlled and concentrated media. (In America, the media is increasingly concentrated under a few major corporations whose profits depend on advertising, with the exception of National Public Radio, and there is also increasing evidence of self-censorship and outside censorship. For one instance, see James Carden, “The crusade to ban Russia policy critics,” in The Nation.)

Populations of the advanced industrial states are aging while those of poorer areas are multiplying. Migrations of people from poorer areas are inevitable but are increasingly bitterly opposed in America and elsewhere.

The spread of disease by the movement of peoples has been predicted to lead to pandemics. So far, advances in medicine and the availability of health care facilities have avoided the worst, but several diseases, including malaria, are still major killers in poorer areas and, in mutated form, could spread even to the rich North.

Our most critical resource, fresh water, is increasingly scarce. Drought already affects America, and attempts to overcome water shortages are flash points in relationships among countries in Africa and Asia.

Damming rivers in Central Asia, as China is doing, and in Kashmir, as India is doing, could be flashpoints for international conflict, while buying relatively well-watered lands in Africa, often corruptly, and evicting the inhabitants, as China and other countries are doing, is likely to lead to popular resistance or guerrilla warfare.

What television began a generation ago has been multiplied by new forms of distribution of information. Even relatively poor people in remote areas have access beyond the imagination of even the rich and powerful a generation ago. Retrieval of information also allows far greater intrusion into the privacy of citizens and potentially control of them by governments. Cyberwar, a concept that hardly existed a few years ago, is a new arena of conflict among nations.

Projection of power is taking new forms. Armies are changing shape: large formations are passé and are being replaced by elite squads or special forces. Indeed, soldiers are being replaced by robots.

Spreading Nukes

Nuclear weapons, once an American monopoly, seem likely to spread in the coming decade beyond the nine states known to have them to the “nth country.” As the war game I described above showed, any temptation to use them in “limited war” would be devastating for the whole world.

This is a clear and present danger, particularly between Pakistan and India. Elsewhere, especially in eastern Europe, the chances of accidents or “miscalculations” are ever present and perhaps rising. [See The Guardian, Ewen MacAskill, “Nato to review nuclear weapon policy as attitude to Russia hardens.”]

International trade will continue to grow but is likely to be increasingly controlled by governments; particularly in food grains, which are becoming harder to grow, governments cannot afford to allow market forces to control their ability to feed their citizens.

Monetary policy appears to be moving in the opposite direction. As the American economy is increasingly removed from supervision, the concentration of wealth will continue, and both the middle class and the poor will suffer. Cutbacks in social services and public works will increase the danger of a major downturn or even a depression. This could also affect foreign policy: it was, after all, the shift to a war economy that ended the Great Depression.

Under these pressures and trends, it seems to me likely that the need for more intelligent formulation of policy and more modest relations among peoples will become more urgent. The world of the future will arrive faster than we expect. Change is inevitable but a wise policy will seek to make it as smooth as possible.

So, in this perhaps not so brave new world, what do we really want?

Fundamental Objectives of U.S. Foreign Policy

The fundamental objective of American policy was clearly set out in the Preamble to the Constitution: “Establish Justice, insure domestic Tranquility, provide for the common defence, promote the general Welfare, and secure The Blessings of Liberty to ourselves and our Posterity.”

Put in less elegant terms, I suggest that the foreign affairs component of this fundamental objective is to achieve affordable world security in which we can pursue the good life and the “Blessings of Liberty.”

When our Founding Fathers gathered in Philadelphia in the summer of 1787, they were motivated and guided by fears of anarchy and tyranny. They sought a path between them in the Constitution they wrote: the Federal Government was to be strong enough to hold the Union together, but not so strong as to tyrannize the states that composed it. They regarded the United States as an experiment to find whether or not we could remain free and responsible participants in the management of our lives.

Since they assumed and hoped that we would live in a republic where the opinion of citizens has some ability to control government decision-making, they believed that, to have a chance to combine liberty and responsibility, citizens needed to be educated. Enhancing the intellectual quality of our citizenry thus became essential to securing “The Blessings of Liberty to ourselves and our Posterity.”

(By way of contrast, in Britain, the ignorance of the public made little difference since the aristocracy and the monarch made the decisions; in dictatorships like the Soviet Union and Nazi Germany, the public had even less influence. The danger in a democracy is manipulation of the public through control of the media, unlimited financial intervention in politics and the belief that it has lost control. Despite bouts of public “activism,” this sense is growing.)

Impressively well read in history, the authors of the Constitution saw militarism as the mother of tyranny. Their discussions make clear their fear of the ambition of leaders and manipulation of public sentiment. They wanted, above all, to prevent American government from copying European despots in the game of war. Thus, they specified that only in an actual attack on the United States was the president allowed to act independently. Otherwise, the legislature, speaking with multiple voices and representing diverse local issues, had to be convinced of the need for military action.

The delegates recognized that foreign military adventures were the biggest threats to the republic they were founding. This was because war would create such insecurity at home as to undermine our way of life, diminish our sense of trust in one another, denigrate our civil liberties,  undercut our respect for our social contract, the Constitution, and divert the fruit of our labor from “the general Welfare.”

Operational Steps Toward Achieving Objectives

Experience has shown that the Founding Fathers were right: it is in our foreign relations where the greatest danger to our overall objectives lies. So it is in foreign affairs where the need for a well-informed citizenry is greatest. But experience also shows that the public is subject to surges of emotion or “war fever” in which reason is overwhelmed. Faulty perception of danger has triggered moves that have threatened our “Domestic Tranquility.”

So, a fundamental challenge is posed for us: how can we, the citizens, acquire sufficient reliable information, trustworthy analysis and objective opinion on which to form our judgment of government decisions?

Citizens need help in addressing such fundamental questions as 1) is there a sufficiently serious threat to American security that requires American response? 2) what are the kinds of response (diplomatic, military, legal, economic) that could be implemented? 3) how likely to be effective are the various possible responses? 4) how costly would each of those responses be? 5) are there alternative, non-American, means of solving the problem we identify? 6) does whatever seems  to be the correct answer move toward a more secure, peaceful and productive world environment in which America participates?

For most citizens such questions are inscrutable. Not only do they lack knowledge and experience, but they are unable to devote sufficient time to finding answers. Consequently, they are apt to answer with incomplete or biased information, or by emotion.

In his farewell address, George Washington pointed to this danger. As he wrote, by allowing passion rather than knowledge or logic to set policy, “The peace often, sometimes perhaps the Liberty, of Nations has been the victim.”

But, we have both personal and political experience in finding sensible answers. Whenever we face difficult problems, most of us seek advice. In matters of health and finance, for example, we seek the opinions of specialists who have the training and experience, and we try to guard against their having conflicts of interest.

Concrete Proposals

Here I suggest a way to apply our daily experience to public policy. It is to create a sort of foreign affairs ombudsman, a council to provide information and advice for the public. There is precedent for this suggestion. Much of what I propose already exists:

Existing governmental information and analytical resources in foreign affairs are extensive. For over a century (since 1914), the American Congress has been advised by the Congressional Research Service. The CRS is an independent organization situated in the Library of Congress and is staffed by approximately 600 scholars who are recognized as experts in their various fields.

The President is advised on economic matters by the Council of Economic Advisers and on sundry other matters by the Office of Management and Budget whose predecessor organization was formed in 1921. It has a staff today of about 550.

The Secretary of State is advised by the Department’s small but highly regarded Bureau of Intelligence and Research. Finally, the director of Central Intelligence is provided with an analysis of the “product” or “take” of the 17 American intelligence agencies by the National Intelligence Council which grew out of the Office of National Estimates that was founded in 1950.

What I propose is the creation of an independent institution, a National Commission, composed of a council of perhaps a dozen senior officers and a staff of perhaps 50 men and women who are expert in the various fields related to foreign affairs. Both groups would be chosen by carefully crafted criteria after a “peer review” and on the basis of their credentials.

They would be obligated by contract not to go to or return to business, law or professions related to foreign affairs but would be given some form of tenure and generous retirement and other benefits. The aim would be to assure their lack of any conflict of interest.

Their task would be to study and report in the public domain upon the fundamental questions on which citizens should be informed. So they would be empowered to demand information without delay or hindrance from all government sources, authorized to hold symposia, conferences and seminars and to commission outside studies and reports. They would also be afforded adequate means of reaching the public through, for example, National Public Radio, press releases, magazine articles, pamphlets and books.

Of course, it is probable that much of the public will not read these materials. That is the worst case; the more likely result would be that they would set a standard which the Executive Branch, the Congress and the media would feel obliged to emulate; and the best case would be that the public education program would raise the level of citizen participation in matters of national importance.

Such an institution is not likely to be warmly welcomed by government officers, some of whom will see it as an intrusion on their “turf.” Congressmen, however, will at least verbally approve it, since many of their constituents will welcome its reports. And the media, or at least working journalists, will find it a source to be tapped and so a welcome aid to their work.

The experience of the Congressional Research Service and the Office of Management and Budget suggests that in proper political circumstances the creation of such an organization is not impossible.

In addition to the National Commission, we should resurrect a modern version of the educational programs that were begun just after the Second World War. Undertaking them was spurred by a recognition that we needed to know more about the world beyond our frontiers and about the eras before our lifetimes.

Programs in General Education were organized at Harvard (under James Conant) and Chicago (under Robert Hutchins); they gave birth to publications (inspired by Sumner Welles) and were funded by the major foundations. They were partly followed by subsidies given to universities for teaching exotic languages. Some of these efforts need to be revived and better focused on national needs.

Do and Don’t

I turn now briefly to a few major points on what we should not do: We should not attempt to force other societies or nations to transform themselves into our image of ourselves; we should not impose upon other nations puppet regimes.

While we have a legitimate need for intelligence, we should ban espionage, which has proven to be so detrimental to our national image and purpose. That is, we should not engage in “regime change” or “nation building” as is currently practiced.

And we should not sell arms abroad. While we cannot suddenly abolish the military-industrial complex, we can and should redirect the activities of our industry toward such domestic activities as fixing the thousands of dangerous and dilapidated bridges spanning our rivers, cleaning up our cities, engaging in massive reforestation, repairing or building schools, hospitals and other public facilities, repairing our roads and recreating a national high-speed rail network.

There is much to be done and we have the skills required to do it.

Lastly, I suggest a few points on what we should do: It is both in our long-term interests and in accord with our heritage to join and support the international legal system; we should financially support but generally not engage our troops in peace-seeking operations; we should continue our efforts to cut back, bilaterally, with Russia, nuclear weapons development and deployment and encourage other nations to move toward denuclearization; and we should support both American private and UN aid programs in the Third World.

In conclusion, we must come to terms with the reality that we live in a multicultural, multinational world. Our assertion of uniqueness, of unipower domination and of military power has been enormously expensive and has created a world reaction against us; in the period ahead it will become unsustainable and is likely to lead precisely to what we should not want to happen: armed conflict.

Moderation, peace-seeking and open-mindedness need to become our national mottos.

William R. Polk is a veteran foreign policy consultant, author and professor who taught Middle Eastern studies at Harvard. President John F. Kennedy appointed Polk to the State Department’s Policy Planning Council where he served during the Cuban Missile Crisis. His books include: Violent Politics: Insurgency and Terrorism; Understanding Iraq; Understanding Iran; Personal History: Living in Interesting Times; Distant Thunder: Reflections on the Dangers of Our Times; and Humpty Dumpty: The Fate of Regime Change.

Losing the American Republic

Decades of letting neocons dictate a hawkish foreign policy have put the American Republic in profound danger, just as presidents from George Washington to Dwight Eisenhower predicted, warnings that Americans must finally take to heart, says ex-U.S. diplomat William R. Polk.

By William R. Polk

In The Financial Times of April 23, Philip Stephens begins a perceptive article with the obvious statement that “It is easier to say that Obama never gets it right than to come up with an alternative strategy.”

Of course it is. It was never easy to construct a coherent policy, but it was never impossible. The problem we face today is different. It is that for a long time we have not been presented by our leaders with any strategy. So the obvious question a citizen (and a taxpayer) should demand be answered is why, despite all the effort, all the proclamations and all the lives and money we are spending, does almost every observer believe that we do not have a policy that we can afford and that accomplishes our minimal national objectives? In this first part of a two-part essay, I will address that problem.

In short, where is the problem? It is tempting to say that it is our lack of statesmen. Where are the heirs to the men who put the world back together again after the Second World War? By comparison to those whom we empower today, those earlier leaders appear heroic figures.

True, they had monumental faults and made costly mistakes, but they thought and acted on an epic scale and tried to cope with unprecedented problems — the reconstruction of Europe, the ending of colonialism in Africa and imperialism in India, the amalgamation of scores of new nations into an acceptable structure of the world community and the containing of unprecedented dangers from weapons of mass destruction.

Today, only half joking, Europeans say that they see only one world-class statesman — German Chancellor Angela Merkel. I seek but find no comparable leaders on the American scene. As Mr. Stephens judged, “Barack Obama has led from behind on the global stage [while] Republicans [are thinking only in terms of] a bumper sticker world.”

We may lament our poverty of leadership, but there are ways to make it function. “Princes,” since long before Machiavelli, have always used advisers; some even listened to them. Surely the capable people among us — like the “wise men” who whispered in the ears of those earlier leaders — can guide today’s leaders toward more viable policies and away from the chaos that engulfs us.

Why is this not happening? Is it that what they have to say is not “popular” or that they cannot reach the decision makers? Or that the structures we have built into our political and economic systems block them? Is it the enormity of the problems we face? Or is it that we lack information? Or is it the want of a matrix or framework in which to place what we know and to decide on the feasibility and affordability of what we want?

More fundamentally, could it be that we, the citizenry, the voters and the taxpayers,  simply do not care enough or keep ourselves well enough informed to make our leaders perform the tasks they avidly seek and we pay them to do?

Each of these possible causes of our current malaise urgently demands our attention. Let me briefly look at them and then move, in my second essay, toward a guide to a viable policy.

Complex World

First, let us admit that the world is indeed more complex today than in earlier times. There are more “actors” and at least some of them have to perform in front of audiences that are more “politicized” than they used to be. Nationalism affects more people than a century ago, and today it is laced with religion in an explosive mixture. A spreading and intensifying sense of fairness and minimal rights shapes actions among peoples who used humbly to submit. Bluntly put, fewer people today are willing to suffer or starve than were their grandparents.

Second, nations that hardly existed are caught up in insurgencies, guerrilla wars and various forms of violence. Supra- or non-national religious movements are not new, but they have become very “worldly” and are now sweeping through Africa and Asia. Some are sowing hatred and massacring or driving into exile whole populations. At the same time, corrupt governments and “warlords” impoverish societies while outside manipulation by force of arms and “dirty tricks” further destabilizes or even destroys political order, leaving trails of shattered lives.

The outside quest for “regime change” has plunged many developing countries into chaos. Floods of migrants pour out in desperate quest for safety while many of those who remain will die wretchedly as they watch their children grow to adulthood stunted from sickness and hunger.  We and several “theys” are stirring the pot. But, regardless of who created these problems, they must be faced today. And they are certainly complex.

Third, while events are certainly complex, we know an astonishing amount about them. Never in human affairs have so many studied so much. So our leaders are primed to do their jobs. At least they should be. Information is not lacking.

In the United States, we employ some 17 intelligence agencies manned by upwards of 100,000 presumably skilled people; a Department of State and associated agencies employing (at my last count) nearly 20,000 officers; a White House staff, including the National Security Council, numbering in the hundreds; a galaxy of war colleges through which pass most of the senior officers of over half of the world’s military and security services; dedicated staffs and subsidized “think tanks” like RAND; and more or less independent think tanks like the Council on Foreign Relations, Brookings, etc.

The media doesn’t do as much as it used to do to educate us, but it is now augmented by “blogs,” opinion pieces, reports and memoirs. Multiple organizations of the United Nations and hundreds of non-governmental organizations provide almost daily accounts of every human activity. And some people still read and even write books.

Even those of us who, by government criteria, have no “need to know,” have access to most of this flood of information. Some is withheld from those of us our Government does not “clear” to receive it, but most of the withheld or at least delayed information is actually about “us” — the covert activities, foibles, misdemeanors  and crimes of our team.

Our leaders are keen to inform us about the (false) beliefs and (dangerous) actions of foreigners. And even if Government often does not help us to understand other peoples, most of what we need to know about them is available in the public domain beyond the reach of government censorship.

So censorship is not the only reason we are not well informed. We citizens must accept much of the blame. Many of us sit on vast “dry” islands where the floodtide of information does not reach or where we or others have built dykes to keep it out. We have allowed the media to drop the pretense of informing us; its job is to entertain us.

When “news” is read out by attractive “presenters,” it is also a form of entertainment. Television is not conducive to difficult issues. It is best on “sound bites.” But it is not only the nature of the media that is formative: most observers believe that it is in large part our laziness or lack of concern that keeps us ill-informed and little engaged. We read little and seek reassurance more than knowledge. Above all, we wish to avoid being challenged.

Easy Opinions

As Alexis de Tocqueville observed of us, “the majority undertakes to supply a multitude of ready-made opinions for the use of individuals, who are thus relieved from the necessity of forming opinions of their own.”

And it is not just opinions or judgments on contemporary affairs, but even general knowledge that is missing. Surveys show that many Americans do not know where Vietnam, Ukraine and Afghanistan are. Some could not find our national capital on a map. As Aaron Burke remarked in the Feb. 14, 2014 Washington Post, some of our would-be ambassadors knew nothing of the character, politics, language, religious affiliation of even those countries to which they were being sent.

Sen. John McCain, R-Arizona, was filmed on C-Span commenting that some of the nominees were “totally unqualified.” In this, sadly, they represent us. [See: Michael X. Delli Carpini and Scott Keeter, What Americans Know About Politics and Why It Matters (1996), Chapter 6, “The Consequences of Political Knowledge and Ignorance.”]

Is this ignorance important? The French conservative philosopher Joseph de Maistre answered that it is, because “every nation gets the government it deserves.” If citizens are uneducated or passive, they can be controlled, as the Roman emperors controlled their peoples with bread and circuses, or as other dictatorships have with “patriotic” demonstrations or manufactured threats.

Indeed, a people can make themselves willing dupes, as the Germans did when they voted Hitler into power in a free election. Ignorance and apathy are the pathogens of representative government. Under their influence, constitutions are weakened or set aside, legislatures become rubber stamps, courts pervert the law and the media becomes a tool. So, even in a democracy, when we duck our civic duties in favor of entertainment and do not inform ourselves, the political process is endangered.

Danger, as our Founding Fathers told us, is ever present. They thought of our system as an experiment and doubted that we could maintain it over time. We have come close to losing it. And today we see signs of its fragility.

American ignorance and apathy extend even to issues immediately affecting the lives of most of us — like jobs, housing, food and health — and when it comes to devoting attention to such possibly terminal issues as nuclear war, baseball always wins. The choice, as the expression goes, is a “no brainer.”

This can be disastrous because, as our first president warned us, unscrupulous politicians can manipulate the public. George Washington found this particularly dangerous in foreign affairs. As he wrote in his Farewell Address, the dangers inherent in dealing with other countries may lead to “the necessity of those overgrown military establishments, which under any form of Government are inauspicious to liberty, and which are to be regarded as particularly hostile to Republican Liberty.”

His words demand our attention because we all welcome comfortable simplicity in place of confusing complexity, and it is in military affairs where the lack of statesmanship among the leaders and ignorance among the people is most clear.

In one of the great theatrical gestures known to history (or legend), that eagle among the hawks, Alexander the Great, demonstrated the easiest way to deal with complexity. To untie the Gordian knot — the very symbol of complexity — he simply cut it. His point was that there is no need to understand if one has a sharp knife.

Alas, as the decades of the cutting of knots in Vietnam, Afghanistan, Iraq, Syria and other places have shown, no matter how sharp the knife, the knot may not be so neatly sliced as Alexander thought. Indeed, as we have observed in our recent wars, “knots” prove capable of reuniting their coils.

President Washington’s Wisdom

George Washington, judged by today’s standards, was neither so well informed nor so lavishly advised as are modern American leaders, but at least on war and peace his instinct was sure and at the end of his career, he embodied the American myth of national decency.

In his “Farewell Address,” he told us that the only safe — because moral — policy is to “Observe good faith and justice towards all Nations. Cultivate peace and harmony with all. … In the execution of such a plan nothing is more essential than that permanent, inveterate antipathies against particular Nations and passionate attachments for others should be excluded [because] The Nation, prompted by ill will and resentment sometime impels to War the Government, contrary to the best calculations of policy. … The peace often, sometimes perhaps the Liberty, of Nations has been the victim. … Real Patriots … are liable to become suspected and odious; while its tools and dupes usurp the applause and confidence of the people, to surrender their interests.”

Partially echoing the values Washington hoped would underlie American action and reacting to the far stronger forces that have grown as America grew, Dwight Eisenhower proclaimed during the 1956 joint Anglo-French-Israeli attack on Egypt that we must all be governed by “One Law,” not one law for us and our friends and another for other states.

On the eve of his departure from the White House, Eisenhower picked up and expanded one of the main themes of Washington and the Founding Fathers (who were deeply suspicious of the military and of the people’s ability to control it): the danger of “those overgrown military establishments, which under any form of Government are inauspicious to liberty, and which are to be regarded as particularly hostile to Republican Liberty.”

Against the power of “the military-industrial complex,” Eisenhower memorably warned that “Every gun that is made, every warship launched, every rocket fired signifies, in the final sense, a theft from those who hunger and are not fed, those who are cold and are not clothed. This world in arms is not spending money alone. It is spending the sweat of its laborers, the genius of its scientists, the hopes of its children. The cost of one modern heavy bomber is this: a modern brick school in more than 30 cities. It is two electric power plants, each serving a town of 60,000 population. It is two fine, fully equipped hospitals. It is some 50 miles of concrete pavement. We pay for a single fighter plane with a half million bushels of wheat. We pay for a single destroyer with new homes that could have housed more than 8,000 people. …

“This is not a way of life at all, in any true sense. Under the cloud of threatening war, it is humanity hanging from a cross of iron.”

To judge how little we have heeded his warning, just multiply the figures Eisenhower cites for the costs of the guns, warships, rockets and planes. When he spoke, our aggregate cost of all the tools of war was about $320 billion; today the cost (in inflation-adjusted dollars) is more than double that amount and also is larger than the aggregate outlay of all other nations.

And, beyond the monetary cost thus measured is the security cost — the world has become far more dangerous at least in part because of our emphasis on our military role. So, Eisenhower questioned, is this “the best way of life to be found on the road the world has been taking?”

Is anyone who has his hand on the wheel, that is any responsible leader, seriously considering whether there is a smoother, safer, more economical and less painful road? If so, I have failed to identify him or her. And, apparently, neither has Mr. Stephens of The Financial Times.

Bowing to the Military

One aspect of this problem is that the military, drawing on the prestige it gains as our defender, is vastly over funded and catered to by both the Executive Branch and the Legislature. As Washington and Eisenhower feared, they have become a state within our nation. This is evident in almost every aspect of the comparison between the military and civilian parts of our government.

Consider the contrast with the Civil Service. The contrast is as sharp in America as in “tin pot” dictatorships in the Third World. When I served in government, I observed that any general, and many colonels, could summon up an Air Force plane for a junket, whereas even the Under Secretary of State had to get special clearance from the President and then negotiate with the Pentagon for official trips. There were, and still are, wildly disproportionate side benefits given to the military and what amount to penalties assessed on the civilians.

For example, roughly half of all ambassadorial appointments were removed from the Foreign Service and given to non-professionals. As Edward Luce wrote in the Dec. 7, 2014 Financial Times, “imagine how [much] harder it would be … to recruit talented military officers if plum generalships were handed out to amateurs who had never worn a uniform.”

The transformation of America into a military culture has deep roots. Arguably it began long before the formation of the Republic in the settler wars with the Native Americans. In the “young republic,” it was carried forward in the War of 1812, Andrew Jackson’s push into “the Floridas” and James K. Polk’s war with Mexico. Then, during and after the Civil War, Americans became truly a warring people. (This is the title of the interpretive history of the American people on which I am at work.)

This legacy was carried forward in two world wars, hundreds of smaller military actions and a half century of Cold War. In 2013, Richard F. Grimmett and Barbara Salazar Torreon reported to Congress on “Instances of Use of United States Armed Forces Abroad” since 1798. They found five declared wars, six undeclared wars and hundreds of other military actions. [Washington D.C.: Congressional Research Service.]

Few Americans, I suspect, are fully aware — despite scores of books and hundreds of articles — of the dimensions of our country’s commitment to the military establishment and the “security” culture embedded in it. Eisenhower’s Military-Industrial Complex has grown not only in size but in spread. It now shapes Congressional action, influences media reporting and convinces labor to cooperate in its projects. Indeed, it is built into the fabric of American society and economy to an extent that would have terrified the Founding Fathers.

Beyond the Military-Industrial-Congressional-Media-Labor Complex, as it has become, are three other powerful aspects of the “security state.” The first of these is the creation of a more or less autonomous elite army within the standing army which, itself, is apart from what the Founding Fathers thought of as our prime military force, the state militias.

This Special Operations Force, according to the Congressional Research Service in 2013 (the latest available figures) was composed of some 67,000 troops and operated under a separate budget of about $7.5 billion. It has its own “think tank,” sources of intelligence, school and even its own magazine (Special Warfare) that prints favorable articles by journalists from all over the world on “politico-military” affairs.

The second aspect of the growth of the military is in overseas bases. They are believed to number over 1,000 and are located in about 63 countries. These figures do not include the “floating bases” on aircraft carriers, troopships and “insertion” vessels nor, for the most part, the bases jointly operated with other countries and special intelligence facilities.

The third aspect is the extension of the military into “security” and intelligence fields that are partly or wholly funded by the Defense Department and often are commanded by serving military officers. According to a recent book, 1,074 new federal government organizations (the existence of which is “classified” and generally withheld from public knowledge) and nearly 2,000 private companies work out of at least 17,000 locations within the United States and an unknown number abroad.

Exceeding Authority

More unsettling but not surprising is that with so much power behind them, some senior military commanders feel able to step outside of their statutory roles to pontificate on affairs beyond their competence and authority. One who this year frightened our European allies was U.S. Air Force General Philip Breedlove, the head of NATO’s operational command. He was taken to task by German Chancellor Merkel, as reported in the March 7, 2015 issue of the respected German weekly Der Spiegel, for “dangerous propaganda” in publicly recommending policies verging on warfare with Russia.

German Foreign Minister Frank-Walter Steinmeier intervened personally with the NATO General Secretary because of Breedlove’s statements. Breedlove’s action was not unprecedented. General David Petraeus essentially ran American affairs in Afghanistan and Iraq while treating the statutory American authority, the ambassador, as a junior partner.

In “The Killing Machines” (The Atlantic, September 2013), Mark Bowden recounts the argument in 2011 between U.S. Ambassador to Pakistan Cameron Munter and CIA Director Leon Panetta over the ambassador’s authority to veto assassinations. Munter quoted Title 22 of the U.S. Code of Federal Regulations, which made the ambassador the chief American authority in the country to which he was appointed. “That means,” commented Bowden, “no American policy should be carried out in any country without the ambassador’s approval.”

Panetta took the dispute to President Obama who ruled in favor of the CIA. Elsewhere also, senior military officers have frequently violated the word and the intent of the Framers of the Constitution in forming and proclaiming policies. In the most famous case of assumption of such powers in the past, President Harry Truman fired General Douglas MacArthur. That was long ago.

It isn’t only, as the American psychologist Abraham Maslow is quoted as saying, “if you only have a hammer, you tend to see every problem as a nail,” but also that ambitious men naturally seek opportunities. In business, they seek money; in the military they seek promotion. Pursuing these goals is often admirable but unchecked it also creates dangers or harms the public interest.

The writings of history are full of accounts of generals who destroyed civilian regimes and, with them, republican liberty. A prudent people will insist that its government both use its military when necessary and always control it. Fear that the people would fail to do so animated the discussions of our Founding Fathers when they were writing our Constitution in 1787. [Madison, Notes, passim.] Our first military leader warned us of the danger, as I have quoted him above.

The Iraq War Disaster

So now consider what we have been doing in the two major American wars of the post-Vietnam years. Because I have written on them in detail elsewhere, I will touch only on those aspects that flesh out the skeleton I have sketched above or illustrate why we need to avoid tactical lunges and adopt strategic thinking.

I begin with Iraq. Iraq illustrates failure to understand the context in which we act, our propensity to jump before looking and our role in creating a security threat. [I have dealt with Iraq intensively in Understanding Iraq (New York: HarperCollins, 2005).]

Consider first the context: Iraq was one of the many countries that evolved from the collapse of imperialism. Put together by the British at the end of the First World War from three provinces of the Ottoman Empire under an imported and British-controlled monarchy, it never found a secure political identity. To control the country, the British built a military organization that, in comparison with other aspects of the regime and the society, was strong. Consequently, Iraq suffered military coup after coup.

Most incoming dictators were simply predatory, but the last in the sequence, Saddam Hussein, made Iraq socially and economically one of the most advanced countries in Africa and Asia. Profiting from increasing oil wealth, he promoted the growth of a middle class, secularized the regime and provided the public with free health services and free education. Whereas in 1920, under the British, only 30 Iraqis were receiving secondary education (and the British thought that was too many), in 1985 the student population reached nearly one and a half million.

The number of doctors went from 1:7,000 to 1:1,800 and life expectancy rose from 40 to 57 years. Schools, universities, hospitals, factories, theaters and museums proliferated. Saddam’s aim was power, and like many Third World leaders he was not an attractive person, but perhaps without meaning to do so, he set in motion events that would have forced Iraq to become a more democratic society. “Would have,” that is, had development not been short-circuited by war.

The first war began in September 1980 with an Iraqi attack on America’s enemy, the revolutionary Iranian government led by Ayatollah Khomeini, that had overthrown the government of America’s ally, the Shah. The American government took a short-sighted view of the war and assisted the Iraqis with provision of the most sophisticated intelligence then available (which enabled the outnumbered Iraqis to defeat the Iranians in crucial battles), but at the same time it supplied Iran with lethal military equipment (in the Iran-Contra affair).

Both the Iraqis and the Iranians realized that America was playing a cynical game. Henry Kissinger summed it up by saying, “It’s a pity they both can’t lose.” It does not seem, in retrospect, that serious thought was given to how the war would affect both societies or American interests. This is borne out by the extension of the war to Kuwait.

Kuwait was another of the legacies of imperialism. In the eyes of every Iraqi leader, including the country’s three British-installed and American-favored kings, Kuwait was an Iraqi province. It was the British who had forced the Ottoman Empire to give it quasi-autonomous status in 1913 — and in 1923 got both the puppet Iraqi government and the precursor of the Saudi state to recognize its frontiers.

Initially, Britain was interested in using it to block any threat to its Indian empire. Following Indian independence in 1947, that interest was replaced by the special relationship under which newly oil-rich Kuwait invested heavily in cash-starved England. Additionally, both Britain and America were keen that it keep its separate status so that no one Middle Eastern power could dominate oil production. Then, for reasons that are still obscure but certainly evinced a lack of strategic thinking, the American government gave the impression that it would not oppose the Iraqi attempt to take over Kuwait.

It happened like this: The war with Iran lasted eight years, killed tens of thousands of Iraqis and cost about $15 billion yearly. (Proportionally, the Iraq-Iran war was more costly than the American war in Vietnam.) Saddam Hussein proclaimed that he was fighting Iran on behalf of the Arabs and particularly of the Kuwaitis, who had a deep fear of Iranian aggression. [For more background on Iraq’s invasion of Iran, see Consortiumnews.com’s “Saddam’s Green Light.”]

Souring on Saddam

Initially at least, the Kuwaitis (and other Arab leaders) agreed with him and supported his war effort.  But as the fighting stalemated, they not only stopped their aid to Iraq but demanded repayment of what they had lent. Saddam had used up all of Iraq’s reserves. The price of oil fell below what could sustain his regime. He became desperate. He begged and pleaded but to no avail.

A violent man, Saddam decided to take what the Kuwaitis would not give, but, himself a crafty politician, Saddam sought American approval. He probably thought America “owed him one” for having fought its enemy, Iran. So he thought America might agree to his reclaiming Kuwait.

When he met with U.S. Ambassador April Glaspie, she (on orders) told him that the U.S. Government “took no position on Arab frontiers.” Saddam took this to be a “green light” — like the one President Gerald Ford and Secretary of State Henry Kissinger gave to Indonesia’s General Suharto to take East Timor — and invaded. [Kissinger and others denied it at the time, but we now have access to the documents and know that, a few years before, in 1975, they condoned and conspired with the Indonesian dictator General Suharto, certainly no more attractive a figure than Saddam, on the invasion. (See Briefing Book 62 at nsarchive.gwu.edu/NSAEBB/NSAEBB62/)]

The American ambassador told The New York Times that no one thought the Iraqis would take “all of Kuwait!” It was a judgment that betrayed no sense of history and apparently no appreciation of Saddam’s desperation.

The Americans and others, including the Russians, reacted sharply. Kuwait’s assets were frozen out of Saddam’s reach. The UN demanded an Iraqi withdrawal. And Saddam became even more desperate. Some in the American government apparently believed that the Iraqis might plunge into Saudi Arabia’s eastern province, where its oil fields are located. So America put together a coalition, including Saudi Arabia and Syria, to chase the Iraqis out of Kuwait. It was successful. President George H.W. Bush ordered the invading forces to break Saddam’s army but not to occupy the country.

However, the war against Saddam was allowed to spill over into actions that were not then foreseen by American leaders and for which the United States and Iraq would pay a fearful price. The U.S. acted in ways that increased Saddam’s desperation and increased his sense of humiliation. It also allowed or perhaps even condoned actions that promoted sectarian — Sunni-Shiite — hostilities to a level not experienced in the Islamic world for centuries.

And, by giving the impression of hostility toward all aspects of Islam, the U.S. shifted such previously anti-Saddam activists as Osama bin Laden into leaders of a jihad against America. Little or no thought was given, apparently, to how the initial objective of getting the Iraqis out of Kuwait could be turned into a stable and constructive result.

Much worse, of course, was to follow a decade later in the George W. Bush administration. The U.S.-led invasion of Iraq was not caused by Saddam’s attack on Kuwait but was a deliberate act of aggression. It was justified to the American public by the allegation that Iraq was developing nuclear weapons, an allegation that Bush knew to be false; he simply ordered his Secretary of State, General Colin Powell, to lie to the public and America’s allies.

Whereas George Washington had warned in his Farewell Address that “The Nation [that is, the public], prompted by ill will and resentment sometime impels to War the Government, contrary to the best calculations of policy,” George W. Bush’s Government deceived the Nation. As Washington also warned, the “Real Patriots” — who, in the Iraq case, realized what was happening and spoke out — “are liable to become suspected and odious; while its tools and dupes usurp the applause and confidence of the people, to surrender their interests.”

Those interests included the lives of the at least 4,500 American soldiers who died and the several hundred thousand who were wounded. Also squandered were some $2 trillion in treasure and the labor of the 2.6 million men and women who served, labor that could have contributed to the American economy. Less tangible but no less real were the goodwill that America had long enjoyed among Iraqis and other peoples and a peace that has been lost in unending war.

This was all predicted and much could have been avoided. It is notable that even David Kilcullen, Bush’s strategist who had been recruited by Deputy Defense Secretary Paul Wolfowitz and relied upon by General David Petraeus, was quoted as saying that “Perhaps the most stupid thing about Iraq was invading the country in the first place.” [Kim Sengupta, “David Kilcullen: The Australian helping to shape a new Afghanistan strategy,” The Independent, July 9, 2009.]

The Afghan Quagmire

I turn now to the failure of American policy in Afghanistan.

The people of Afghanistan had, at least since the time of Alexander the Great, repeatedly and violently demonstrated their determination not to be ruled by foreigners. In 1842, they inflicted the worst defeat the British army suffered in the Nineteenth Century. Soberly, the British then recognized that they were not going to transform the Afghans and that attempting to do so was not worth the cost.

So, essentially, they played their new version of “the Great Game” by Afghan rules. They bribed, cajoled and flattered the Afghan rulers and where they could and at little cost fought a sort of French-Moroccan Beau Geste or American-style “Wild West” campaign on the Northwest Frontier against the tribal peoples. They recognized that what they really wanted — to keep the Russians out of South Asia — didn’t require more.

When their turn came, the Russians were not willing to take such a detached approach. In 1979, they dived into Afghanistan and tried, as they were doing in their Turkish Central Asian provinces, to Russify and partially to Communize it. Their policy was more than a failure; it was a catastrophe. [The best account is Rodric Braithwaite, Afghantsy: The Russians in Afghanistan 1979-89 (London: Profile Books, 2011). Also see William R. Polk, Violent Politics (New York: HarperCollins, 2007), Chapt. 11.]

The war was a catastrophe both for the USSR, which the Afghans played a major role in destroying, and also for Afghanistan, which became a “failed state.” It was that failed state — a shattered, warlord-plagued maelstrom — the Russians had left behind that the Taliban movement tried to overcome with a violent assertion of primitive “Afghanism.”

Objectively, America never had any compelling interest in Afghanistan. It had no known major resources, was poor, backward and remote. Moreover, anyone with a slight knowledge of history would know that it had proven to be one of the most difficult countries in the world to rule, much less to “regime change” or “nation build.”

Not only had the Afghans defeated the British and the Russians but they tolerated only a modicum of control by their own government. Each village or small neighborhood of villages ruled itself and was rigidly locked into traditional culture. Largely based on Islamic law but including elements that were pre-Islamic, the social code featured segregation of women, revenge for insult (badal), protection of refugees (melmastia) and absolute independence. In the south, it was known as the Pukhtunwali. That culture was not to America’s liking, but it was the Afghans’ culture. Slowly and cautiously, it had been evolving toward a more “enlightened” and liberal pattern.

Evolving, that is, when left more or less to its own devices. When under attack, Afghan society closed upon itself and reverted to customs that the Russians had found (and Americans would find) objectionable. Generally, however, the Americans at least have not found disapproval of customs to be a sufficient reason to invade other societies. What caused the American invasion was, ironically, a playing out of two commands of the Pukhtunwali, the Afghan “way.”

Misunderstanding Afghanistan

First was the absolute imperative of the Afghan way, the granting of protection (melmastia) to fleeing warriors. The Taliban honored this tradition by giving sanctuary to Osama bin Laden, whose followers in the al-Qaeda movement had attacked America in 2001. The U.S. Government demanded that Osama be handed over. The Afghan Government refused. To have done so would have been, in Afghan eyes, a mortal sin.

So, second, America itself employed another recognized part of the Afghan code, badal, or revenge. It attacked. As the then Taliban Minister of War later told me, “we understood your desire for revenge. … It is also our way.”

It was the Afghan way, but was it either necessary or useful to America? Put another way, could American objectives have been accomplished at lesser cost in another way?

To answer that question requires a definition of objectives: First was the objective of the American political leaders. They believed that they had to demonstrate toughness. About nine in 10 Americans (and between six and seven in 10 Britons) favored the invasion. It was easy for President Bush to ride the popular surge. Indeed, he not only rode but spurred on war fever.

Second, as George Washington had long before warned, “The Government sometimes … adopts through passion what reason would reject.” Reason would have avoided a ruinous war. But instead of adopting the course demanded by the national interest, or trying to think through the options with the public, Bush played on popular emotion. The Taliban were bad and America had to punish them.

Third, on their side, the Taliban leaders knew that a war would be ruinous for them. They were not very adroit, but they tried to find a way to avoid it. They could do so only within the code by which they lived. To have met the American demand to surrender Bin Laden would have been a mortal sin, but they had some flexibility in applying melmastia — they had to protect Bin Laden but did not need to allow him to act as he might wish. So they took him into “protective” custody and proclaimed that they would prevent him and his followers from engaging in further foreign activities. It is not clear that the Bush administration even considered any possible variation of that option.

So Bush ordered the attack. Despising the ragged, ill-armed guerrillas, the Americans struck. The war might have ended in a bloody but limited raid. Instead, without much thought, it morphed into a conflict that, so far, has lasted nearly 14 years, has cost America 2,357 dead, perhaps 50,000 wounded and at least $1 trillion.

The number of Afghans killed or wounded is not known but is certainly in the hundreds of thousands; the sick and malnourished amount to nearly half the population; a whole generation of children has been “stunted” and will never grow to full potential; the traditional civic order has been replaced by a corrupt and brutal collection of mafias that both engage in the largest drug business in the world and steal (and ship abroad) billions of U.S. aid dollars. There is no light at the end of that tunnel.

I find no evidence that the U.S. government at any point, from before the invasion to the present, carefully considered whether or not it really had any strategic interest in Afghanistan (the Russians were in full retreat and we no longer had a compelling interest in protecting India). It simply took whatever seemed to be the next step as the trajectory of events seemed to dictate and, since other than bribery it had little to offer, those steps were military.

During the last 14 years, we have relied almost exclusively on military action. At first, the action was “boots on the ground.” Recently, in our attempt to cut American casualties, we have shifted largely to “coercive air power.” [Robert Pape, New York Times. April 21, 2015.]

Our aim has been to “decapitate” the guerrilla forces and to beat down insurgent attacks. Both have failed. On the one hand, as we have killed more senior and experienced leaders, younger and more ambitious or violent men have replaced them, and, on the other hand, surveys show that guerrilla action has increased — not been suppressed — in and around areas that have been attacked by drones or special forces.

If we cannot win, have we tried negotiation? No, in fact we have made any form of negotiation virtually impossible. Among our moves, one stands out baldly: the American military and the CIA have maintained a “kill list” of insurgents to be shot on sight. Because the list is secret, no Talib can know if he is on the list. So he is apt to suspect that any offer of negotiation is really a trap, designed both to kill him and to divide and weaken his movement. [As discussed by Jo Becker and Scott Shane in the New York Times, May 29, 2012. I have discussed this and other aspects of the Afghan conflict in a series of essays in my book Distant Thunder (Washington 2011).]

The cost of our failure to win or negotiate is still being paid: we are still engaged in combat, still striking targets, still shoveling in billions of dollars to a failed puppet government. And in this unending war, we have created far more enemies than we have “pacified” or killed. Now they come not only from Muslim Asia and Africa, but even from Europe and America. They are enemies we helped to create. We were sold a phony policy and self-defeating means to implement it: counterinsurgency never worked anywhere and certainly has not worked in Afghanistan.

Lessons Needed Learning

It would be rewarding if one could say that our experience in Vietnam, Iraq and Afghanistan has made us wiser in our approaches to Somalia, Syria, Libya and Yemen, but it is hard to substantiate that conclusion. Yet the lessons are there to be learned. There are more, but consider just these few:

–Military action can destroy but it cannot build;

–Counterinsurgency does not work and creates new problems;

–Nation building is beyond the capacity of foreigners;

–Piecemeal, uncoordinated actions often exacerbate rather than solve problems;

–The costs of military action are multifold and usually harm not only the attacked but also the attacker’s society and economy;

–Reliance on military action and supply of weapons to the client state encourages it to undertake actions that make peace-seeking harder rather than easier;

–War radiates out from the battlefield so that whole societies are turned into refugees who, in desperation, flee even far abroad and create unforeseen problems;

–The sense that the attacker is a bully spreads and converts outsiders into enemies;

–Failure to understand the society and culture even of the enemy is self-defeating;

–Angry, resentful people eventually strike back where they can and so create a climate of perpetual insecurity.

The result of such actions is deforming to the central objective of an intelligent, conservative and constructive American foreign policy — the preservation of our well-being. So, in the second part of this essay, I propose to show how we might begin to approach strategic thinking to accomplish our central national objective.

William R. Polk is a veteran foreign policy consultant, author and professor who taught Middle Eastern studies at Harvard. President John F. Kennedy appointed Polk to the State Department’s Policy Planning Council where he served during the Cuban Missile Crisis. His books include: Violent Politics: Insurgency and Terrorism; Understanding Iraq; Understanding Iran; Personal History: Living in Interesting Times; Distant Thunder: Reflections on the Dangers of Our Times; and Humpty Dumpty: The Fate of Regime Change.




‘Christianists’ Howl at Obama’s Truth-telling

Though founded by a pacifist who spoke for the oppressed, Christianity has contributed to more wars, injustices and genocides in all corners of the world than any other religion. But President Obama’s glancing reference to this reality prompted howls of protest, as ex-CIA analyst Paul R. Pillar notes.

By Paul R. Pillar

President Barack Obama gave a speech last week at the National Prayer Breakfast that was instructive, reasonable, accurate and fair. It also contained messages that are all the more important to hear and heed in light of some of the reactions to the speech itself.

I’m not talking about the usual reflexive Obama-bashing, which happens all the time and is not worth paying attention to. I am referring instead to reactions that indicate some more fundamental attitudinal problems that jeopardize not only U.S. foreign policy but also some core American values.

Some of the most outlandish reactions, such as former Virginia Gov. Jim Gilmore’s comment that Mr. Obama’s remarks at the event were “the most offensive I’ve ever heard a president make in my lifetime,” probably reflect these problems and are not just the familiar garden-variety partisanship.

Mr. Obama’s remarks included upbeat and informal comments about the Dalai Lama’s presence and an earlier speech by stock-car race driver Darrell Waltrip. They also included some observations, which seemed to get all the attention in the subsequent reactions, about how at different times through history different religions have been “twisted and distorted, used as a wedge,” sometimes with outrageously inhumane consequences.

But the core of the speech consisted of three main points. The first was a call for “some basic humility”, for a recognition that “the starting point of faith is some doubt,” and that we should not be so full of ourselves that we think “God speaks only to us” and “somehow we alone are in possession of the truth.”

The second point concerned the need “to uphold the distinction between our faith and our governments, between church and state.” And the third was to affirm the “Golden Rule that we should treat one another as we wish to be treated.”

It is hard to see how any American who isn’t in active denial about the benefits to mankind of the Enlightenment could disagree with any of those three points. As for the first, and the President’s preceding comments about how all religions, including Christianity, have at times been twisted for nefarious purposes, as E. J. Dionne observes, if acknowledging one’s imperfections were to be considered an insult to one’s religious faith, that would make St. Augustine a heretic.

The second is a bedrock principle of the American political system, enshrined in the U.S. Constitution. The third is at the center of any ethical system apart from rationalizations of selfishness à la Ayn Rand.

In many ways, unfortunately, the United States has in its dealings with the rest of the world repeatedly flouted both the principle of humility and of not assuming a monopoly on truth, and the principle of treating others as we would want to be treated.

We could go on at great length on those themes, but sticking to strictly religious issues leads to a comparably disturbing observation: that American discourse and American politics have been moving ever farther from separation of faith and government, and toward having the United States take sides in favor of some religions over others. This trend manifests itself in several ways.

One way is in the prominence and power in the United States of Christianist politicians, who are every bit as worthy of that descriptor as many politicians elsewhere merit the label Islamist. Overt religiosity among American political leaders and their tendency to apply religious faith to public policy issues have waxed and waned through different phases of the Republic’s history, but the trend over the most recent decades has been upward.

A reflection of change in this regard over the past half century was the comment of Rick Santorum, a prominent example of a Christianist politician and a leading candidate for the Republican presidential nomination in 2012, that his fellow Catholic John Kennedy’s pledge to keep his religion out of the conduct of Kennedy’s presidency made Santorum “want to throw up.” The latest phase of increasing prominence of overt Christianists in American politics coincides with increasingly reflexive negative views about Islamist politicians elsewhere.

Another manifestation has been a series of more specific attacks on the establishment clause of the First Amendment, no one of which may be earthshaking but which collectively represent a substantial weakening of that foundation of American constitutionalism.

The attacks have included such things as proselytization at U.S. military academies, a Supreme Court decision (in the Hobby Lobby case) allowing one citizen’s private religious beliefs to govern the content of other citizens’ taxpayer-assisted medical care, and most recently defiance of that same Supreme Court on same-sex marriage by the chief justice of a state supreme court whose campaign to insert his religious beliefs into public affairs has included earlier defiance of a federal court order to remove a monument to the Ten Commandments that he had erected at a state courthouse.

A third indication of the trend, noticeable especially over the past decade and a half, has been increased Islamophobia, the overt rejection or distrust of an entire religion and not just of an extremist fringe. The sentiment has been pervasive in the private sector but repeatedly bleeds over into public and political space, as when Louisiana Gov. Bobby Jindal says that if American Muslims “want to set up their own culture and values, that’s not immigration, that’s really invasion.”

This entire pattern damages the effectiveness of U.S. foreign policy. It leads many foreigners to believe that U.S. actions are motivated by an objective of bashing one religion and advancing another, even if that is not their actual purpose. This belief leads to resentment and hatred of the United States and resistance against what it is trying to do.

This is why the current administration wisely eschews the term “Islamic terrorism,” notwithstanding all the baiting it gets from domestic opponents on this semantic point. It is also why the previous administration wisely backed off from calling its counterterrorist effort a “crusade,” as George W. Bush initially called it shortly after 9/11.

But such resistance and reactions to U.S. foreign policy initiatives don’t even constitute the most fundamental danger of going down the sectarian path. That danger has to do with how through the centuries religiously-defined and religiously-motivated conflict has been one of the biggest sources of organized bloodshed and human suffering.

We see such bloodshed and suffering in abundance today in the Middle East, South Asia, and parts of Africa. The West has mostly extricated itself from that type of agony, but did so only after the agony of the Thirty Years War led Europeans to erect a state system that banished to the past the idea that religious difference should be the basis for one state waging war against another state.

It would be disastrous for the United States to do anything that even hints at return to a pre-Westphalian mindset that unites sovereigns and scripture. Dionne notes that some secularists criticized President Obama’s remarks last week for having “soft-pedaled the theological roots of violence.” They have a point, but a speech at a prayer breakfast would not have been an appropriate occasion for lecturing on that broader lesson.

There are fundamental values at risk at home in the United States, too. Mr. Obama gave a nod in his speech to the Founding Fathers, and rightly so. Anyone with an interest in the Founders’ intent should pay attention to their intent regarding the importance of non-establishment of religion. George Washington said, “The United States is not a Christian nation, any more than it is a Jewish or Mohammedan nation.”

The Founders’ thinking on the subject was influenced both by the sordid history of religiously-driven conflict and by their awareness of how specific dominant religious identifications of some of the American colonies raised the risk of religious repression of those not part of the dominant sect. They saw non-establishment of religion by the state as critical to the preservation of religious freedom, one of the basic freedoms that are part of American values.

President Obama managed to hit the right notes for a prayer breakfast, speaking positively about religious faith, from the compassion of a spiritual leader such as the Dalai Lama to the role that prayer may have played for Darrell Waltrip as he was driving a race car 200 miles per hour. He also had an important message that should be heeded by anyone who makes any proposals about public policy that would involve the United States taking sides in favor of, or against, any particular religion.

Paul R. Pillar, in his 28 years at the Central Intelligence Agency, rose to be one of the agency’s top analysts. He is now a visiting professor at Georgetown University for security studies. (This article first appeared as a blog post at The National Interest’s Web site. Reprinted with author’s permission.)




The Right’s Dubious Claim to Madison

From the Archive: Central to the question of whether America’s Right is correct that the Constitution mandated a weak central government is the person of James Madison and what he and his then-fellow Federalists were doing at the Constitutional Convention in 1787, wrote Robert Parry in 2013.

By Robert Parry (Originally published on June 23, 2013)

By asserting a connection to America’s First Principles, the Tea Party is forcing a reexamination of the early years of the Republic and a reconsideration of what the Framers of the U.S. Constitution intended.

That debate may be useful even if the Tea Party’s chief motivation in provoking it is simply a “rebranding” that recognizes that the image of white people waving the “Stars and Bars” and seeking “states’ rights” to disenfranchise black and brown people has a negative connotation for many modern Americans.

So, to present a more palatable image, today’s Right has dialed back the time machine from 1860 to 1776, trading in the Confederate flag for the Revolutionary War-era Gadsden flag with its coiled snake and “Don’t Tread on Me” motto, except with the federal government replacing the British monarchy as the source of “tyranny.”

Substantively, however, nothing has changed in this rebranding. There’s the same animosity that the Confederates felt toward President Abraham Lincoln and the Union when the South’s beloved institution of slavery was threatened. Only now the neo-Confederates are expressing their hatred for President Barack Obama and the federal government for advocating programs like voting rights, immigration reform, food stamps and guaranteed health care that are viewed by the predominantly white Tea Party as disproportionately aiding racial and ethnic minorities.

But instead of referencing the precedent of the Confederacy’s secession from the Union in defense of “states’ rights” and slavery, the Tea Party and today’s Right assert that they simply want to restore the original vision of America’s Founding, an argument not much different from the one the Confederates were making in 1860.

To that end, the Right has invested heavily in “scholarship” that seeks to present the Framers as essentially pre-Confederates who believed strongly in “states’ rights” and wanted a weak central government. However, that “history,” in turn, requires slanting the evidence and kidnapping one key Founder in particular.

Madison as Flip-Flopper

At the center of today’s ideological struggle over the Founding era is James Madison, a chief architect of the U.S. Constitution when he was essentially a protégé of George Washington in the 1780s. But Madison was also a practical politician who drifted in the 1790s and later into the orbit of his central Virginia neighbor, Thomas Jefferson, who led bitter fights against Washington’s Federalists and especially Alexander Hamilton.

This ambivalence, with Madison central to Washington’s vision of a strong central government before realigning with Jefferson’s fierce loyalty to Virginia and its interests, makes him a perfect candidate for the Right’s rewriting of the narrative surrounding the Constitution. The earlier Madison who sided with Washington on centralizing government power can be blurred with the later Madison who supported Jefferson in defending Virginia’s regional interests, particularly its investment in slavery.

In this regard, Andrew Burstein and Nancy Isenberg’s Madison and Jefferson offers some valuable insights into the history of the era and the political collaboration between these two important Founders. Unlike many histories that glorify Jefferson in particular, this book, published in 2010, provides a fairly objective assessment of the strengths and weaknesses of the two leaders.

Perhaps the authors’ most significant observation is that Jefferson and Madison must be understood as, first and foremost, politicians representing the interests of their constituencies in Virginia where the two men lived nearby each other on plantations worked by African-American slaves, Jefferson at Monticello and Madison at Montpelier.

“It is hard for most to think of Madison and Jefferson and admit that they were Virginians first, Americans second,” Burstein and Isenberg note. “But this fact seems beyond dispute. Virginians felt they had to act to protect the interests of the Old Dominion, or else, before long, they would become marginalized by a northern-dominated economy.

“Virginians who thought in terms of the profit to be reaped in land were often reluctant to invest in manufacturing enterprises. The real tragedy is that they chose to speculate in slaves rather than in textile factories and iron works. And so as Virginians tied their fortunes to the land, they failed to extricate themselves from a way of life that was limited in outlook and produced only resistance to economic development.”

Not only was Virginia’s agriculture tied to the institution of slavery, but after the importation of slaves was banned in 1808, Virginia developed a new industry, the breeding of slaves for sale to new states forming in the west.

The Virginia Dynasty

In that way, the so-called Virginia Dynasty over the presidency, which ran consecutively from Jefferson (starting in 1801) through Madison (from 1809) to James Monroe (ending in 1825), defended the interests of the South’s slaveholders in part by constraining the role of the federal government in building the young nation’s industrial strength and its financial development.

It had been a fear among Southern politicians from the earliest days of American independence that a strong federal government would eventually eradicate slavery. So, it was a Southern imperative carried forward by the Virginia Dynasty to limit that power even though Madison had been instrumental in centralizing it.

While the Right likes to look at Madison as a constitutional purist who always favored tightly constrained federal powers, a more useful prism for seeing the historical Madison is that he shifted from the patronage of Washington, who despised the idea of state “sovereignty” after experiencing its inefficiency while commander-in-chief of the Continental Army, to the tutelage of the brilliant but mercurial Jefferson, who was wedded to the interests of Virginia.

Whereas Washington working with his protégés Madison and Hamilton had a national vision of a fast-developing country with states subordinate to the federal government, Jefferson could not move beyond his more parochial concept of Virginia and Southern states maintaining substantial freedom from a federal government that might seek to abolish slavery.

Under Washington’s wing in the years immediately after independence, while Jefferson was serving as the U.S. representative to France, Madison recognized the disaster of the Articles of Confederation, which set the rules for U.S. governance from 1777 to 1787. The Articles made the 13 states “sovereign” and “independent” and deemed the federal government simply a “league of friendship.” For instance, Madison shared Washington’s interest in placing the development of national commerce under the control of the federal government, but Madison’s initial Commerce Clause failed to win the support of the Virginia legislature.

The United States was also flailing at maintaining domestic security, with Shays’ Rebellion rocking western Massachusetts in 1786-87 and the federal government too weak to help restore order. Washington feared that Great Britain would exploit the regional and social divisions of the new country and thus threaten its hard-won independence.

“Thirteen sovereignties,” Washington wrote, “pulling against each other, and all tugging at the federal head, will soon bring ruin to the whole.” [See Catherine Drinker Bowen’s Miracle at Philadelphia.]

Madison’s Federalism

Madison was of a similar mind. In 1781, as a member of the Congress under the Articles of Confederation, he introduced a radical amendment that “would have required states that ignored their federal responsibilities or refused to be bound by decisions of Congress to be compelled to do so by use of the army or navy or by the seizure of exported goods,” noted Chris DeRose in Founding Rivals. However, Madison’s plan, opposed by the powerful states, went nowhere.

Similarly, Madison lamented how the variety of currencies issued by the 13 states and the lack of uniform standards on weights and measures impeded trade. Again, he looked futilely toward finding federal solutions to these state problems.

So, after a decade of growing frustration and mounting crises under the Articles, a convention was called in Philadelphia in 1787 to modify them. Washington and Madison, however, had a bigger idea. They pressed instead to scrap the Articles altogether in favor of a new constitutional structure that would invest broad powers in the central government and remove language on state sovereignty and independence.

Madison told Washington that the states had to be made “subordinately useful,” a sentiment that Washington shared after seeing how states had failed to meet their financial obligations to his troops during the Revolution.

As Washington presided over the convention, it fell to Madison to supply the framework for the new system. Madison’s plan called for a strong central government with clear dominance over the states. Madison’s original plan even contained a provision to give Congress veto power over state decisions.

The broader point of the Constitutional Convention was that the United States must act as one nation, not a squabbling collection of states and regions. James Wilson from Pennsylvania reminded the delegates that “we must remember the language with which we began the Revolution: ‘Virginia is no more, Massachusetts is no more, Pennsylvania is no more. We are now one nation of brethren, we must bury all local interests and distinctions.’”

However, as the contentious convention wore on over the summer, Madison retreated from some of his more extreme positions. “Madison wanted the federal assembly to have a veto over the state assemblies,” wrote David Wootton, author of The Essential Federalist and Anti-Federalist Papers. “Vetoes, however, are bad politics, and again and again they had to be abandoned in the course of turning drafts into agreed texts.”

But Madison still pushed through a governing structure that bestowed important powers on the central government including the ability to tax, to print money, to control foreign policy, to conduct wars and to regulate interstate commerce.

Madison also came up with a plan for approving the Constitution that bypassed the state assemblies and instead called for special state conventions for ratification. He knew that if the Constitution went before the existing assemblies, with the obvious diminution of their powers, it wouldn’t stand a chance of winning the approval of the necessary nine states.

Resistance to the Constitution

Still, the Constitution prompted fierce opposition from many prominent Americans who recognized how severely it reduced the powers of the states in favor of the central government. These Anti-Federalists decried the broad and sometimes vague language that shifted the country away from a confederation of independent states to a system that made the central government supreme.

What Madison and his cohorts had achieved in Philadelphia was not lost on these Anti-Federalists, including Pennsylvania delegates who had been on the losing side and who then explained their opposition in a lengthy report which declared: “We dissent because the powers vested in Congress by this constitution, must necessarily annihilate and absorb the legislative, executive, and judicial powers of the several states, and produce from their ruins one consolidated government.

“The new government will not be a confederacy of states, as it ought, but one consolidated government, founded upon the destruction of the several governments of the states. The powers of Congress under the new constitution, are complete and unlimited over the purse and the sword, and are perfectly independent of, and supreme over, the state governments; whose intervention in these great points is entirely destroyed.”

The Pennsylvania dissenters noted that the state sovereignty language from the Articles of Confederation was stripped out of the Constitution and that national sovereignty was implicitly transferred to “We the People of the United States” in the Preamble. They pointed out that the Constitution’s Article Six made federal statutes and treaties “the supreme law of the land.”

“The legislative power vested in Congress is so unlimited in its nature; may be so comprehensive and boundless [in] its exercise, that this alone would be amply sufficient to annihilate the state governments, and swallow them up in the grand vortex of general empire,” the Pennsylvania dissenters declared.

Some Anti-Federalists charged that the President of the United States would have the powers of a monarch and that the states would be reduced to little more than vassals of the central authority. Others mocked the trust that Madison placed in his schemes of “checks and balances,” that is, having the different branches of government block one another from committing any grave abridgement of liberties.

Famed Revolutionary War orator Patrick Henry, one of the leading Anti-Federalists, denounced Madison’s scheme of countervailing powers as “specious imaginary balances, your rope-dancing, chain-rattling, ridiculous ideal checks and contrivances.” Henry and other opponents favored scrapping the new Constitution and calling a second convention.

Toward Ratification

Though the Anti-Federalists were surely hyperbolic in some of their rhetoric, they were substantially correct in identifying the Constitution as a bold assertion of federal power and a major transformation from the previous system of state independence.

For his part, Madison was not only the chief architect of this shift from state to national power; he favored an even clearer federal dominance with his proposed veto over actions by state assemblies, the idea that died in the compromising at Philadelphia. However, Madison and other Federalists faced a more immediate political challenge in late 1787 and early 1788: securing ratification of the new Constitution in the face of potent opposition from the Anti-Federalists.

Despite Madison’s ploy of requiring special ratifying conventions in the various states, the Anti-Federalists appeared to hold the upper hand in key states, such as Virginia and New York. So, to defend the new Constitution, Madison joined with Alexander Hamilton and John Jay in anonymously composing the Federalist Papers, a series of essays which not only sought to explain what the Constitution would do but perhaps more importantly to rebut the accusations of the Anti-Federalists.

Indeed, the Federalist Papers are best understood not as the defining explanation of the Framers’ intent, since the actual words of the Constitution (contrasted with the Articles of Confederation) and the debates in Philadelphia speak best to that, but as an attempt to tamp down the political fury directed at the proposed new system.

Thus, when the Anti-Federalists thundered about the broad new powers granted the central government, Madison and his co-authors countered by playing down how radical the new system was and insisting that the changes were more tinkering with the old system than the total overhaul that they appeared to be.

That is the context which today’s Right misses when it cites Madison’s comments in Federalist Paper No. 45, entitled “The Alleged Danger From the Powers of the Union to the State Governments Considered,” in which Madison, using the pseudonym Publius, sought to minimize what the Constitution would do. He wrote:

“If the new Constitution be examined with accuracy, it will be found that the change which it proposes consists much less in the addition of NEW POWERS to the Union, than in the invigoration of its ORIGINAL POWERS.

“The regulation of commerce, it is true, is a new power; but that seems to be an addition which few oppose, and from which no apprehensions are entertained. The powers relating to war and peace, armies and fleets, treaties and finance, with the other more considerable powers, are all vested in the existing Congress by the Articles of Confederation. The proposed change does not enlarge these powers; it only substitutes a more effectual mode of administering them.”

Today’s Right trumpets this essay, and especially Madison’s summation that “the powers delegated by the proposed Constitution to the federal government are few and defined. Those which are to remain in the State governments are numerous and indefinite.” But the Right ignores what Madison was trying to accomplish with his essay. He was trying to defuse the opposition. After all, if Madison really thought the Articles only needed some modest reform, why would he have insisted on throwing them out altogether along with their language about state “sovereignty” and “independence”?

Power with Teeth

Nor was it entirely accurate for Madison to suggest that replacing the federal government’s toothless powers in the Articles with powers having real teeth in the Constitution was trivial. Under the Constitution, for instance, the printing of money became the exclusive purview of the federal government, not a minor change. Madison also was a touch disingenuous when he downplayed the importance of the Commerce Clause, which gave the central government control over interstate commerce. Madison understood how important that federal authority was.

To cite Madison as an opponent of an activist federal government, the Right must also ignore Federalist Paper No. 14 in which Madison envisioned major construction projects under the powers granted by the Commerce Clause. “[T]he union will be daily facilitated by new improvements,” Madison wrote. “Roads will everywhere be shortened, and kept in better order; accommodations for travelers will be multiplied and meliorated; an interior navigation on our eastern side will be opened throughout, or nearly throughout the whole extent of the Thirteen States.

“The communication between the western and Atlantic districts, and between different parts of each, will be rendered more and more easy by those numerous canals with which the beneficence of nature has intersected our country, and which art finds it so little difficult to connect and complete.”

What Madison is demonstrating in that essay is a core reality about what he, Washington and Hamilton were seeking. They were pragmatists seeking to build a strong and unified nation.

Yet, despite the prestige of George Washington and the propaganda of the Federalist Papers, Madison encountered intense opposition to ratification at the Virginia convention where fears of a federal abolition of slavery were raised, ironically, by two of the most famous voices for “liberty,” Patrick Henry and George Mason.

Henry and Mason have gone down in popular U.S. history as great espousers of freedom. Before the Revolution, Henry was quoted as declaring, “Give me liberty or give me death!” Mason is hailed as a leading force behind the Bill of Rights. But their notion of “liberty” and “rights” was always selective. Henry and Mason worried about protecting the “freedom” of plantation owners to possess other human beings as property.

The Virginia Convention

At Virginia’s Ratification Convention in June 1788, Henry and Mason raised several arguments against the proposed Constitution, but their hot-button appeal centered on the danger they foresaw regarding the abolition of slavery.

As historians Burstein and Isenberg wrote in Madison and Jefferson, Henry and Mason warned the plantation owners at the convention that “slavery, the source of Virginia’s tremendous wealth, lay politically unprotected.” At the center of this fear was the state’s loss of ultimate control over its militia, which could be “federalized” by the President as the nation’s commander-in-chief under the proposed Constitution.

“Mason repeated what he had said during the Constitutional Convention: that the new government failed to provide for ‘domestic safety’ if there was no explicit protection for Virginians’ slave property,” Burstein and Isenberg wrote. “Henry called up the by-now-ingrained fear of slave insurrections, the direct result, he believed, of Virginia’s loss of authority over its own militia.”

Henry floated conspiracy theories about possible subterfuges that the federal government might employ to deny Virginians and other Southerners the “liberty” to own African-Americans. Describing this fear-mongering, Burstein and Isenberg wrote:

“Congress, if it wished, could draft every slave into the military and liberate them at the end of their service. If troop quotas were determined by population, and Virginia had over 200,000 slaves, Congress might say: ‘Every black man must fight.’ For that matter, a northern-controlled Congress might tax slavery out of existence. Mason and Henry both ignored the fact that the Constitution protected slavery on the strength of the three-fifths clause, the fugitive slave clause, and the slave trade clause. Their rationale was that none of this mattered if the North should have its way.”

At Philadelphia in 1787, the drafters of the Constitution had already capitulated to the South’s insistence on its brutal institution of human enslavement. That surrender became the line of defense that Madison cited as he sought to finesse the arguments of Mason and Henry.

Burstein and Isenberg wrote, “Madison rose to reject their conspiratorial view. He argued that the central government had no power to order emancipation, and that Congress would never ‘alienate the affections of five-thirteenths of the Union’ by stripping southerners of their property. ‘Such an idea never entered into any American breast,’ he said indignantly, ‘nor do I believe it ever will.’

“Madison was doing his best to make Henry and Mason sound like fear-mongers. Yet Mason struck a chord in his insistence that northerners could never understand slavery; and Henry roused the crowd with his refusal to trust ‘any man on earth’ with his rights. Virginians were hearing that their sovereignty was in jeopardy.”

Despite the success of Mason and Henry in playing on the fears of plantation owners, the broader arguments stressing the advantages of Union carried the day, albeit narrowly. Virginia ultimately approved ratification by 89 to 79.

Return of Jefferson

With the return of Jefferson from France in 1789, the political physics of the young Republic began to change. Though Jefferson, the principal author of the Declaration of Independence, had offered little input into the development of the Constitution, he immediately grew concerned over how the Federalists around Washington and Hamilton sought to implement it, with ambitious projects for national development.

Jefferson, who served as Washington’s Secretary of State, and Hamilton, who was Treasury Secretary, represented the two poles of how the nation should proceed and their clashes were personal as well as ideological. The two men gave impetus to the emergence of “factions,” what Washington had feared as a great threat to the Republic.

Soon the lines were drawn between Jefferson’s Democratic-Republicans and Hamilton’s (and Washington’s) Federalists. In the middle was Madison who shocked Hamilton and Washington by essentially abandoning their side of the argument and aligning himself with Jefferson. In the Federalist view, the gravitational pull of Virginian politics had yanked Madison out of Washington’s orbit and moved him into Jefferson’s.

Madison, who had previously recognized the logical disconnect between the liberties of a Republic and the existence of slavery, soon fell silent on the issue. As Burstein and Isenberg note, 1791 was the last time Madison criticized slavery publicly: “That was when Madison prepared notes for a National Gazette essay, never published, in which he asserted that slavery and republicanism were incompatible.”

In effect, Jefferson began acting on the logic of the Henry-Mason argument, that a strong central government would eventually doom slavery. Thus, Jefferson opposed the Federalist project to deploy the empowered central government under the Constitution to build the nation, ideas like Hamilton’s national bank and even Madison’s road construction.

Jefferson proved to be an adept, even ruthless, politician as he secretly financed newspaper attacks on his Federalist rivals, such as John Adams, who succeeded Washington as the second president in 1797. Jefferson pushed Adams aside in 1801 to become the third president.

In doing so, Jefferson presented his ideology as an insistence that the Constitution be strictly interpreted to keep federal authority within its “enumerated powers.” Politically, he portrayed his movement as one defending simple “farmers,” but his true base of political support was the Southern slaveholding aristocracy.

Jefferson’s Racism

Jefferson’s racism, which included the pseudo-scientific skull measurements in his Notes on the State of Virginia purporting to prove the inferiority of African-Americans, colored his administration’s foreign policy, too. He sided with French Emperor Napoleon’s scheme to crush the slave uprising in Haiti, a movement for black freedom that Jefferson feared would spread northward.

Ironically, the defeat of Napoleon’s army in Haiti forced the Emperor to forgo the second phase of his plan, the expansion of his empire into the center of the North American continent. Instead, he offered to sell the territory to Jefferson in a deal negotiated by Secretary of State Madison. In buying the Louisiana territories, Jefferson and Madison ignored the principle of the Constitution’s “enumerated powers,” which said nothing about buying land that doubled the size of the country.

Similarly, Madison’s stumbling performance as the fourth president during the War of 1812 changed his mind about the value of a national bank, which he came to see as a necessity for financing an effective military force.

Yet, for all their flexibility on governing principles while in office, Jefferson and Madison hardened in defense of Virginia’s industry of slavery. Though both recognized the principled case against slavery, their political and financial interests overcame any moral qualms that they may have had.

After their presidencies, Jefferson and Madison remained loyal to their neighbors, the slaveholders of Virginia, who as a group had discovered a lucrative new industry: breeding slaves for sale to the new states emerging in the west. Jefferson himself saw the financial benefit of having fertile female slaves.

“I consider a woman who brings a child every two years as more profitable than the best man of the farm,” Jefferson remarked. “What she produces is an addition to the capital, while his labors disappear in mere consumption.”

While recognizing the economic value of slavery, Jefferson suggested that the ultimate resolution of slavery would be to expatriate black Americans out of the country. One of Jefferson’s ideas was to take away the children born to black slaves in the U.S. and ship them to Haiti. In that way, Jefferson posited that both slavery and America’s black population could be phased out.

Slaveholders as Victims

Jefferson and Madison also insisted on framing the slavery issue as one in which the white Southerners who owned slaves were the real victims. In 1820, Jefferson wrote a letter expressing his alarm over the bitter battle surrounding the admission of Missouri as a slave state. “As it is, we have the wolf by the ear and we can neither hold him, nor safely let him go,” Jefferson wrote. The imagery sought sympathy for the Southern slaveholders as the ones caught in a dangerous predicament, tenuously holding onto a ravenous wolf.

After returning to his Virginia plantation, Madison expressed his own sympathy for the slave-owning South in a play that he wrote, entitled “Jonathan Bull and Mary Bull.” The plot involved the wife Mary having one black arm, which husband Jonathan had accepted at the time of their marriage but later found offensive. He demanded that Mary either have her skin peeled off or her arm cut off.

In Madison’s script, Jonathan Bull becomes obnoxious and insistent even though his remedy is cruel and even life-threatening. “I can no longer consort with one marked with such a deformity as the blot on your person,” Jonathan tells Mary, who is “so stunned by the language she heard that it was some time before she could speak at all.”

Madison’s play clumsily made the belligerent and cruel Jonathan represent the North and the sympathetic and threatened Mary the South. As historians Burstein and Isenberg note, “Madison’s refusal to acknowledge the North’s right to speak out against southern slavery is matched by his feminization of the South, vulnerable if not wholly innocent and routinely subjected to unwarranted pressure.”

In other words, Madison considered the South’s white slaveholders the real victims here, and the North’s abolitionists were unfeeling monsters.

Late in his life, Jefferson was confronted on the moral and intellectual contradiction between his soaring “all men are created equal” rhetoric and his prosaic defense of slavery. The French patriot, the Marquis de Lafayette, who had fought at Washington’s side against the British and who became an advocate for emancipation in 1788, challenged his old friend Jefferson during a tour of the country that Lafayette had helped forge.

In 1820, Lafayette “pressed Jefferson to become again the activist [for liberty] he had been when they first met.” Lafayette told Jefferson that “I find, in the Negro Slavery, a Great draw Back Upon My Enjoyments” from the success of American independence, as Burstein and Isenberg note.

But Lafayette’s pain over the continuation and even expansion of slavery in the United States did not move Jefferson to reconsider his position. Unlike Washington and some other Founders whose wills freed their slaves, Jefferson (who died in 1826) and Madison (who died in 1836) did not grant any blanket freedom. Madison freed none of his slaves; Jefferson only freed a few related to the Hemings family of which his purported mistress, Sally Hemings, was a member.

Heading to War

Jefferson and Madison (at least the later incarnation of Madison as Jefferson’s ally) also helped put the nation on the path to the Civil War by lending support to the “nullification” movement, in which Southern states insisted that they could reject (or nullify) federal law. That was the opposite of the position Madison had taken at the Constitutional Convention, where he favored giving Congress the power to veto state laws.

In the early 1830s, Southern politicians sought “nullification” of a federal tariff on manufactured goods, but were stopped by President Andrew Jackson who threatened to deploy troops to South Carolina to enforce the Constitution.

In December 1832, Jackson denounced the “nullifiers” and declared “the power to annul a law of the United States, assumed by one State, incompatible with the existence of the Union, contradicted expressly by the letter of the Constitution, unauthorized by its spirit, inconsistent with every principle on which it was founded, and destructive of the great object for which it was formed.”

Jackson also rejected as “treason” the notion that states could secede if they wished, noting that the Constitution “forms a government not a league,” a reference to a line in the Articles of Confederation that had termed the fledgling United States a “league of friendship” among the states, not a national government.

Jackson’s nullification crisis was resolved nonviolently, but the South continued to resist any application of federal authority, even when the government sought to provide disaster relief, out of fear that such efforts could become a legal precedent for abolishing slavery.

Finally, in 1860, with the election of Abraham Lincoln from the new anti-slavery Republican Party, Southern states seceded from the Union and formed the Confederacy which explicitly authorized the institution of slavery in perpetuity. It took the Union’s victory in the Civil War to free the slaves and to make African-Americans full citizens of the United States. However, the defeated South still balked at equal rights for blacks and invoked “states’ rights” to defend segregation during the Jim Crow era.

White Southerners amassed enough political clout, especially within the Democratic Party, the successor to Jefferson’s Democratic-Republican Party, to fend off civil rights for blacks. The battle over states’ rights was joined again in the 1950s when the federal government finally committed itself to enforcing the principle of “equal protection under the law” as prescribed by the Fourteenth Amendment.

Many white Southerners were furious that their system of segregation was being dismantled by federal authority. Southern rightists and many libertarians insisted that federal laws prohibiting denial of voting rights for blacks and outlawing segregation in public places were unconstitutional. But federal courts ruled that Congress was within its rights in banning such discrimination within the states.

The Modern Right

The anger of Southern whites was taken out primarily on the Democratic Party, which had led the fight for civil rights. Opportunistic Republicans, such as Richard Nixon, fashioned a “Southern strategy” that deployed racial code words to appeal to Southern whites. Soon, the region flipped from solidly Democratic to predominantly Republican as it is today.

Southern white anger was also reflected in the prevalence of the Confederate battle flag on pickup trucks and in store windows. But direct appeals to racism became politically unpalatable in modern America, so today’s Right began its rebranding. From a movement that resented federal intervention on behalf of blacks and other minorities, the Right became a movement that decried federal intervention as a violation of fundamental American “liberties.”

Still, the rebranding was only cosmetic. Today’s Tea Party wants much the same thing and is motivated by many of the same fears as the generations of pre-Confederates, Confederates, post-Confederates and neo-Confederates. They all want to maintain white supremacy, and they resent the federal government’s insistence that black (and brown) people be treated as full citizens.

Thus, you see the Tea Party’s aggressive support for state laws restricting voting rights (especially for minorities) and the Tea Party’s furious opposition to immigration reform that would give millions of Hispanics a pathway to citizenship. Plus, it was the election of the first African-American president that created the impetus for the Tea Party’s emergence in the first place, amid calls from whites to “take our country back” and slurs about Barack Obama being born in Kenya.

But the overriding historical question raised by the Tea Party’s insistence that it represents the founding ideals of the United States is whether the nation embraces the intent of Washington (and the earlier incarnation of Madison) for a strong central government seeking the public good or the resistance to the Constitution that was pushed by slave-owning Virginians, such as Jefferson (and the later incarnation of Madison).

The former interpretation sought to deploy the federal government on behalf of fulfilling the goals of the Constitution’s Preamble, including the need to “promote the general welfare.” The latter interpretation saw an activist federal government as a death knell to slavery.

Today’s Tea Party may wish to pretend that its overwhelmingly white membership dressing up in Revolutionary War costumes separates it from the image of angry white segregationists wearing white sheets, waving the Stars and Bars and spitting on black children on their way to school. But the Tea Party’s opinion of the Constitution and the interpretation that embraced slavery, secession and segregation are one and the same.

Investigative reporter Robert Parry broke many of the Iran-Contra stories for The Associated Press and Newsweek in the 1980s. You can buy his latest book, America’s Stolen Narrative, either in print here or as an e-book (from Amazon and barnesandnoble.com). For a limited time, you also can order Robert Parry’s trilogy on the Bush Family and its connections to various right-wing operatives for only $34. The trilogy includes America’s Stolen Narrative. For details on this offer, click here.




Journalism and Reality

From Editor Robert Parry: One thing that I’ve learned from my four-plus decades in journalism is that many people only like reporting that reinforces what they already believe. Facts that go off in a different direction can make them angry and they are usually not hesitant to express their anger.

For instance, in the 1980s, when I was covering the Nicaraguan Contra rebels for the Associated Press, many readers of AP copy, including some of my editors, shared Ronald Reagan’s enthusiasm for these “freedom fighters” whom Reagan likened to America’s Founding Fathers.

So, when I discovered the Contras engaging in a variety of criminal activity, including extrajudicial killings, rape, torture and drug trafficking, my reporting was unwelcome both inside and outside the AP (and later I encountered the same hostility at Newsweek). The usual response was to challenge my journalism and to pretend that the ugly reality wasn’t the reality.

You might say that that’s just the life of a journalist. Get over it. And you’d have a point. But the larger problem is that this trend toward what you might call “selective narrative” appears to be accelerating. Ideologues and partisans don’t just make arguments for their causes, they create overarching narratives to validate their causes.

And the more money and media a group has, the more effective it is in imposing its narrative on the broader, unsuspecting (and often ill-informed) public.

In the Contra example, many Americans believed in President Reagan and thus were open to the pro-Contra narrative that Reagan’s team skillfully deployed. Information that ran counter to the propaganda of “white hat” Contras fighting “black hat” Sandinistas was seen as discordant and needed to be stamped out along with anyone associated with it.

In 1996, when San Jose Mercury News reporter Gary Webb called to ask me about my Contra-cocaine experience (before he published his “Dark Alliance” series), it was this hostility toward any criticism of the Contras that I warned him about as he contemplated reviving the scandal.

Tragically, my concerns based on my own experience were well-founded. Not only did the CIA and government spokesmen go after Webb’s story, but so did virtually all the major news organizations (which had ignored or disparaged the scandal in the 1980s). These events are recounted in the new movie, “Kill the Messenger.” [Also, see Consortiumnews.com’s “WPost’s Slimy Attack on Gary Webb.”]

But a similar pattern holds true in other cases of presenting facts that conflict with what some people choose to believe. I have seen this both in challenging mainstream “conventional wisdom” and out-on-the-fringe “conspiracy theories.” Many people only want their preconceptions reinforced; they don’t want to rethink them.

False Founding Narrative

Most recently, I have encountered this phenomenon in pointing out fallacies in the right-wing (and sometimes left-wing) Founding Narrative, which presents the Framers of the Constitution in anti-historical ways in order to validate policies being promoted for the present, i.e., to make it appear that some modern position was shared by the Framers.

So, on the radical Left and Libertarian/Tea Party Right, you might get the depiction of the Framers as government-hating revolutionaries who wanted a heavily armed population prepared to kill representatives of an oppressive political system. It has also become an article of faith in some circles that the authors of the Constitution favored strong states’ rights and hated the notion of a strong central government.

Yet, that is simply not the history. The principal Framers of the Constitution were a group known as the Federalists. Led by General George Washington and his able acolytes James Madison and Alexander Hamilton, the Federalists despised the system of states’ rights contained in the Articles of Confederation and they assembled in Philadelphia in 1787, in part, out of alarm over the Shays Rebellion in western Massachusetts, which some of Washington’s former Revolutionary War commanders had just put down.

The Federalists devised as strong a central government as they thought could win ratification. Madison even favored greater federal dominance by giving the U.S. Congress veto power over all state laws, a proposal that was watered down, although federal law was still made supreme.

In other words, the Constitution’s Framers wanted to stabilize the young country, protect its fragile independence and rely on a strong central government to build its future. That is the history, albeit an inconvenient history for many folks these days who are selling the American people on a false Founding Narrative.

So, when I point out these facts, there is an angry backlash. I’m accused of being a “statist,” or dismissed as “just a journalist,” not a historian: whatever’s necessary to protect the false narrative. Instead of simply arguing their case for a smaller government or a heavily armed population or whatever on the merits, these people get angry because their historical references have been debunked.

Perhaps it’s naive to think that ideologues and partisans will ever surrender what is a useful argument, no matter how false it is. But there should be some honesty in political debate and some respect for the actual facts and the real history.

Robert Parry is a longtime investigative reporter who broke many of the Iran-Contra stories for the Associated Press and Newsweek in the 1980s. He founded Consortiumnews.com in 1995 to create an outlet for well-reported journalism that was being squeezed out of an increasingly trivialized U.S. news media.




The Right’s Tenth Amendment Myth

Exclusive: Millions of Americans have been deceived into a false understanding of what the Constitution’s Framers intended because of a right-wing lie about the significance of the insignificant Tenth Amendment, reports Robert Parry.

By Robert Parry

A central part of the American Right’s false Founding Narrative is that the Tenth Amendment trumps the Constitution’s creation of a powerful central government that possesses a mandate to do what’s necessary to provide for the country’s “general Welfare.” In Right-Wing World, the Tenth Amendment gives nearly all powers to the states.

Yet, the reality is that the Tenth Amendment is one of the most meaningless of all the amendments to the U.S. Constitution, except maybe the Eighteenth, which prohibited the sale of liquor and was subsequently repealed by the Twenty-first Amendment.

Indeed, the Tenth Amendment, read in the context of the broad powers that the Federalist authors of the Constitution gave to the central government, carries almost no weight at all. It says: “The powers not delegated to the United States by the Constitution, nor prohibited by it to the States, are reserved to the States respectively, or to the people.”

But the relevant point is that the Constitution granted nearly unlimited power to the U.S. Congress to enact legislation on behalf of “the general Welfare” within the context of republican governance, with the approval of the U.S. president, and with the sign-off of the U.S. Supreme Court.

This concept — embraced by James Madison, Alexander Hamilton, George Washington and other Framers — was to rely on the Constitution’s intricate checks and balances to prevent government overreach, not to hamstring the people’s elected representatives from doing what was necessary to build the nation both then and in the future.

This reality of what was done in Philadelphia in 1787 was not lost on either supporters or opponents of the Constitution. The so-called Anti-Federalists were shocked that the Federalists had, in effect, hijacked the Constitutional Convention away from its original goal of amending the Articles of Confederation, which made the states “sovereign” and “independent” and left the central government as merely a “firm league of friendship.”

But General George Washington, in particular, despised the concept of states’ rights, since he had seen his Continental Army go without pay and supplies to nearly starve during the Revolutionary War. He was joined in this sentiment by his bright protégé Madison and his old wartime aide-de-camp Hamilton.

So, the Constitutional Convention tossed out the Articles of Confederation and proposed a new structure making “We the People of the United States” the nation’s new sovereign and relegating the states to an inferior status, what Madison called “subordinately useful.”

Angry People

I realize that this reality, or my pointing it out, makes some people angry. They want to believe that their hatred of the federal government matched what the Framers felt. And the Right has done a remarkable job in propagandizing a large segment of the U.S. population into believing this invented narrative.

Some right-wing believers even insist that any action by the U.S. government to provide for “the general Welfare” is “unconstitutional,” such as the Affordable Care Act, which addressed an undeniable threat to “the general Welfare”: the fact that tens of millions of Americans were forced to live in fear of premature death because they could not afford health insurance.

But the Framers’ mandate to provide for “the general Welfare” was not some mistake or afterthought. It is included both in the famous Preamble and in Article One, Section Eight, which delineates the so-called “enumerated powers.” There, the Constitution grants Congress the “Power To lay and collect Taxes, Duties, Imposts and Excises, to pay the Debts and provide for the common Defence and general Welfare of the United States,” with the only stated restriction that “all Duties, Imposts and Excises shall be uniform throughout the United States.”

Article One, Section Eight further grants Congress the power “To make all Laws which shall be necessary and proper for carrying into Execution the foregoing Powers, and all other Powers vested by this Constitution in the Government of the United States, or in any Department or Officer thereof.”

Put together, as Alexander Hamilton and other Federalists noted, the Constitution empowered Congress to do what was needed to protect and build the new nation. As historian Jada Thacker wrote, “these clauses restated in the vernacular flatly announce that ‘Congress can make any law it feels is necessary to provide for whatever it considers the general welfare of the country.’”

And that was not just the view of the Federalists back then or some historian today. It was why the enemies of the Constitution fought so hard to block its ratification in 1788. For instance, New Yorker Robert Yates, who walked out of the convention in protest, wrote a month after the Constitution had been completed:

“This government is to possess absolute and uncontrollable power, legislative, executive and judicial, with respect to every object to which it extends. The government then, so far as it extends, is a complete one. It has the authority to make laws which will affect the lives, the liberty, and the property of every man in the United States; nor can the constitution or the laws of any state, in any way prevent or impede the full and complete execution of every power given.”

Madison, then a staunch Federalist, had favored giving even more power to Congress and making the states even more subordinate. “Madison wanted the federal assembly to have a veto over the state assemblies,” wrote David Wootton, author of The Essential Federalist and Anti-Federalist Papers. But Madison’s veto idea was jettisoned in favor of giving the federal courts the power to judge whether state laws violated the Constitution.

Fighting the Constitution

Despite these few concessions, the Constitution emerged from the secret meetings in Philadelphia as a stunning assertion of federal power. Anti-Federalists immediately recognized what had happened and rallied strong opposition to the new governing framework.

As dissidents from the Pennsylvania delegation wrote: “We dissent because the powers vested in Congress by this constitution, must necessarily annihilate and absorb the legislative, executive, and judicial powers of the several states, and produce from their ruins one consolidated government.” [See Consortiumnews.com’s “The Right’s Inside-Out Constitution.”]

The Constitution’s broad powers were particularly alarming to southern slaveholders because of the prospect that the North would eventually gain economic and political supremacy and push through anti-slavery legislation that would wipe out the South’s vast investment in human chattel and thus destroy the region’s plantation aristocracy.

Virginia’s Patrick Henry and George Mason made this argument most aggressively to Virginia’s ratifying convention, with Henry warning the Commonwealth’s slave owners that if they approved the new governing structure, “they’ll free your niggers!”

Faced with these alarms about federal powers, Madison agreed to propose some limiting amendments, though he felt that a Bill of Rights was superfluous. Nevertheless, some of the first ten amendments did specifically restrict Congress’s power.

For instance, the First Amendment begins with the phrase “Congress shall make no law” while other amendments assert specific rights of citizens. The Tenth Amendment, however, simply states that powers not granted to the national government by the Constitution remain with the people and states.

Thus, the scope of the Tenth Amendment is entirely dependent on what preceded it, i.e., the nearly unlimited powers that the Constitution granted to the national government. In other words, if the Framers declared, as they did, that Congress could enact any law that it deemed necessary to promote “the general Welfare” and that federal law would be supreme, then the Tenth Amendment meant almost nothing, since there were few powers left over for the states. It was a sop to the Anti-Federalists.

Still, the Constitution’s opponents, especially slave owners in Virginia, did not just surrender after ratification. Instead, they devised a clever strategy for preventing the possibility that Congress would wipe out their massive capital investment in slavery.

Behind the charismatic Thomas Jefferson, who was in Paris in 1787 and thus did not participate in the Constitutional Convention, the plantation aristocracy simply pretended that the Constitution didn’t mean what it said.

Jefferson’s Wordsmithing

Jefferson, one of Virginia’s biggest slaveholders and a masterful wordsmith, promulgated the absurd notion of “strict construction,” which meant that only specific powers mentioned in Article One, Section Eight could be exercised by Congress. Regarding domestic policy, that meant such relatively narrow powers as coining money, setting up post offices, establishing rules for naturalization, regulating interstate commerce, etc.

Jefferson’s “strict construction” was absurd because it ignored the obvious intent of the Framers and the need for the United States to act in ways that could not be specifically anticipated in 1787, a reality that confronted Jefferson himself after he was elected president in 1800.

Three years later, President Jefferson had the opportunity to buy the Louisiana Territories from France, but there was no wording in Article One, Section Eight about expanding the size of the United States. Clearly, the Framers had adopted elastic phrasing for just such an eventuality, but Jefferson had insisted on his crazy “strict construction” argument.

So, what did Jefferson do? He simply ignored his previous “principle” and implicitly accepted the Federalists’ interpretation of the Constitution, which they had principally authored. Congress approved the purchase of the Louisiana Territories, doubling the size of the United States and giving Jefferson what is regarded as his greatest accomplishment as president.

Though even Jefferson, the inventor of “strict construction,” chose to repudiate his own argument, this insidious notion has survived the past two centuries in the fetid swamps of Right-Wing World.

It was a factor in the South’s resistance to anti-slavery restrictions that preceded the Civil War and it has been touted in modern times by such right-wing luminaries as Supreme Court Justice Antonin Scalia as part of his self-serving “originalism,” i.e., whatever Scalia wants done must have been what the Framers wanted done.

The real history of the Constitution has little impact on these ideologues. They have simply found it useful to wrap themselves in the cloaks of the Framers even when that requires distorting what the actual Framers intended.

While there can be legitimate arguments about the proper size and scope of the federal government (or for that matter any government), the facts should be the facts and the history should be the history. The Right, however, has deceived millions of Americans into believing a false narrative about the U.S. Constitution and the nation’s Founding for the purpose of distorting the debate.

[For more on this history, see Consortiumnews.com’s “The Right’s Dubious Claim to Madison” and “Thomas Jefferson: America’s Founding Sociopath.”]
