July/August 2025 | Washington Monthly

Rules to Win By
https://washingtonmonthly.com/2025/06/01/rules-to-win-by/
June 1, 2025

The July/August issue is here.


Democrats are putting a lot of their political chips on the message that Donald Trump’s radical tariff regime will raise consumer prices, tank the economy, and alienate voters. So far, that’s looking like a pretty good wager. But it’s also a risky one. If Trump continues to ease and modify his tariff policies, it’s possible that the economy won’t fall into recession and inflation won’t be at intolerable levels by the time the 2026 midterms roll around. It’s even conceivable that by November 2028, enough manufacturing will have been reshored (whether because of tariffs or not) that Trump and his designated successor—presuming there is one—will be able to declare victory.

Even if that scenario seems unlikely, there is an aspect to Trump’s tariff pitch that Democrats ignore at their peril: It addresses a deep yearning among middle- and working-class Americans, especially those living outside large metro areas. For decades, these folks have watched their communities fall further and further into economic and civic decline, even as big coastal cities have boomed. It’s hard to exaggerate their fury at this inequity, their desperation to reverse it, and their gratitude to Trump for, in their eyes, at least trying to do something about it. These are precisely the voters the Democratic Party keeps losing and most needs to win back. And right now, Democrats don’t have a convincing alternative economic vision to Trump’s.

Arguably, they did. It was called Bidenomics. It consisted of massive fiscal stimulus, aggressive antitrust enforcement, and an industrial policy of targeted tariffs and federal support for key industries such as alternative energy and microchips, much of it going to red states and districts. I could make the case—and have in these pages—that this agenda was quite successful (the reshoring Trump promises was already happening under Biden). But in 2024, enough voters disagreed that we now have President Trump. Part of the reason was that voters felt more punished by inflation than helped by the policies. Part of it was that Biden was too old to run, much less to articulate his economic agenda. But it was also because Bidenomics was a half-thought-through hodgepodge of policies and programs that, while connected, lacked a unifying theory—or at any rate one that even the most eloquent younger members of his administration, like Jake Sullivan and Pete Buttigieg, could explain in under half an hour.

The advantage of Trump’s agenda is its simplicity: A single policy—high tariffs—will solve all our problems. Tariffs will punish the countries that stole our jobs, bring those jobs back to America, and raise enough revenue to pay for massive tax cuts. The disadvantage of Trump’s agenda is that it is simplistic, as even his own base understands. Nearly two-thirds of Republicans believe that Trump’s tariffs will raise prices for consumer goods, according to an April Gallup poll, and 25 percent oppose them.

What Democrats need is a vision for reviving the American dream that is more convincing than Trump’s, but more robust and understandable than Biden’s. In this issue of the Washington Monthly, senior editor Phillip Longman offers one: regulated competition. The term was coined by the political scientist and economic historian Gerald Berk to describe the sophisticated set of federal market rules that steered the course of the nation’s economy from the late 19th century to the 1970s, before it was dismantled in the era of deregulation. These rules set the terms of competition in much of the economy, from banking to transportation to energy. “Working with industry,” writes Longman, “federal lawmakers and regulators hashed out rules that determined who could enter and exit different key sectors, what terms of service they could impose, and with whom they could merge.”

To modern ears trained in the language of libertarianism, this sounds benighted. But it was precisely the system that built the United States into a capitalist superpower while delivering broad-based prosperity. It did so, Longman explains, by catalyzing a virtuous cycle of innovation:

Firms in key industry sectors like transportation and electricity were guaranteed modest but predictable profits that allowed them to attract more capital, and to take greater risks, than they otherwise could. In exchange, companies were obliged to serve all market segments, rather than cherry-pick the most profitable. This enabled smaller cities, towns, and rural areas to compete on a more equal footing with large cities on the coasts, thus spreading economic development and wealth creation more equitably across the country while also serving as a check on the growth of financiers and oligarchy. 

With the Trump administration and the Supreme Court decimating regulatory agencies, and “abundance liberals” blaming excessive red tape for many of the nation’s ills, the idea that Democrats should rally around an agenda of economic reregulation might sound crazy. Yet a decade ago, almost no one (aside from Washington Monthly writers) was talking about reviving antitrust enforcement. Now it has a foothold in both parties. And tariffs were considered retrograde. Now they’re the pillar of American economic policy. As Barry Lynn explains elsewhere in this issue, Democrats stand to reap a political windfall if they can learn to connect policies like these to language that speaks to the American working class’s hunger for economic liberty. There’s no reason why regulated competition (or whatever the political messaging pros decide to call it) can’t go from the forgotten secret of America’s past economic success to the blueprint for its future prosperity.  

—Paul Glastris, Editor-in-Chief

Inside the issue:

COVER:
The Secret to Reindustrializing America Is Not Tax Cuts and Tariffs. It’s Regulated Competition
From airlines to energy, shipbuilding to railroads, America became a capitalist superpower in the 20th century based on careful market rules. It can do so again.
by Phillip Longman

Resurrecting the Rebel Alliance
To end the age of Trump, Democrats must relearn the language and levers of power.
by Barry Lynn

ON POLITICAL BOOKS:
God and Man at Sea
William F. Buckley spent a lifetime trying to make a coherent intellectual case for conservatism but could never articulate what it was supposed to consist of apart from owning the libs.
by Jacob Heilbrunn

What Hungary Lost When It Obeyed in Advance
The country’s leaders thought they could restore the nation’s lost glory through alliance with Hitler. By 1945, Budapest lay in ruins and 550,000 Hungarian Jews were dead.
by Christian Caryl

The Supreme Court’s Immunity-to-Impunity Pipeline
Grievance dressed as law, history warped into license, today’s Court is not checking Trump’s authoritarianism—it’s codifying it.
by George Thomas

Clever the Twain Shall Meet
An epic new biography of Samuel Clemens confirms the Missourian’s literary mastery but contends that the most important character he ever created was his own.
by Sara Bhatia

FEATURES:
Donald Trump Is Following the Sam Brownback Playbook
The former Kansas governor’s radical economic agenda undermined the state’s prosperity, decimated vital government services, tanked his popularity, and put a Democrat in power. Could the same fate await the current president?
by Nate Weisberg

How Reproductive Freedom Advocates Outsmarted the Anti-Abortion Movement
Since the reversal of Roe v. Wade, the number of abortions is up because of telehealth and the free sharing of mifepristone and misoprostol.
by Carrie N. Baker

Unscrambling the Price of Eggs
Yes, bird flu is a factor, but so is greedflation.
by Claire Kelloway

Who Is Batya Ungar-Sargon?
A Berkeley-educated leftist who couldn’t bear drinking at a bar with Trump voters is now MAGA’s top defender. A tale for our times.
by Nate Weisberg

The Next Frontier of Plutocracy
A little-noticed FEC ruling has opened the door for mega-donors to give nearly unlimited amounts of money directly to political campaigns. Elections will never be the same.
by Bill Scher

Lebanon’s Precarious Future
The collapse of Hezbollah’s dominance has left a power vacuum in Lebanon—and a rare opening to reimagine the state. But without reform, aid, disarmament, and a rethinking of the centralized system of governance, the country risks falling back into the abyss.
by Hicham Bou Nassif

In Defense of Everything-Bagel Liberalism
Critics warned that the Biden administration put so many conditions on the grants it offered to semiconductor manufacturers that the centerpiece of its industrial policy would fail. Those conditions turned out to be key to the program’s success.
by Joel Dodge

The Secret to Reindustrializing America Is Not Tax Cuts and Tariffs. It’s Regulated Competition.
https://washingtonmonthly.com/2025/06/01/the-secret-to-reindustrializing-america-is-not-tax-cuts-and-tariffs-its-regulated-competition/
June 1, 2025

From airlines to energy, shipbuilding to railroads, America became a capitalist superpower in the 20th century based on careful market rules. It can do so again.


Republicans and Democrats now generally agree that we must make more stuff in America, but no consensus exists about how to do that. Under President Joe Biden, the strategy was to offer subsidies to key industries, like microchip manufacturers, and then use targeted tariffs to protect those efforts. Under President Donald Trump, the plan apparently is to impose, or threaten to impose, high tariffs on a shifting set of nations and products while threatening to cut Biden’s targeted financial incentives and replace them with across-the-board tax cuts, mostly for the well-to-do.

Considering how central the goal of reindustrialization is to both parties, it’s noteworthy that the range of policy levers being debated is by and large limited to just three: tariffs, the tax code, and direct public investment. Yet while these can be useful tools, they are hardly the only ones, or even the most powerful. Indeed, historically, fostering America’s industrial strength depended far more on deploying regulations to steer market behavior. 

When Americans hear the word regulation, they tend to think of the environmental and consumer protection measures put in place by federal agencies mostly since the 1970s. But for a century before that, a huge body of regulation of a different kind steered the course of the nation’s economic development. It was regulation that set market rules of competition. Which kinds of banks could operate where, and how much interest could they charge or pay? What rates could railroads or airlines set for transporting various types of cargo or passengers over different distances? How much profit could investors in electric utilities or telecommunications companies make, and what customers were they required to serve and at what prices? Working with industry, federal lawmakers and regulators hashed out rules that determined who could enter and exit different key sectors, what terms of service they could impose, and with whom they could merge.

During America’s century-long rise as a capitalist superpower, such market rules fit together to form an increasingly sophisticated and pervasive system that the political scientist and economic historian Gerald Berk has dubbed “regulated competition.” It was a uniquely American system for governing industrial capitalism, and it delivered broad prosperity for decades. It did so first by catalyzing a virtuous cycle of innovation. Firms in key industry sectors like transportation and electricity were guaranteed modest but predictable profits that allowed them to attract more capital, and to take greater risks, than they otherwise could. In exchange, companies were obliged to serve all market segments, rather than cherry-pick the most profitable. This enabled smaller cities, towns, and rural areas to compete on a more equal footing with large cities on the coasts, thus spreading economic development and wealth creation more equitably across the country while also serving as a check on the growth of financiers and oligarchy. But then, beginning in the 1970s, policy makers from both parties largely dismantled this well-calibrated system of political economy in a rush to “deregulate” the economy and unleash “the market.” 

An especially vivid example of how America’s system of regulated competition once worked is aviation. This essay tells the story of how careful federal marketplace rules fostered the growth of air travel, domestic airplane manufacturing, and commerce in smaller cities across America—and how the demise of that system eroded all three. The same story could be told of other crucial industries, from finance to retail to shipbuilding. Washington’s abandonment of regulated competition explains much of what’s gone wrong with the American economy over the past 40 years, and its restoration could be the key to the country’s industrial revival. 

To understand how smart market regulation spurred innovation and equitable growth in aviation, it’s useful to begin with the way the country dealt with the previous big revolution in transportation technology. Throughout the late 19th and early 20th centuries, policy makers had wrestled with what had come to be known as “the railroad problem,” a seemingly intractable dilemma involving a vicious mix of monopoly and ruinous competition. Early railroads held local monopolies in many places, which they used to extract wealth from the local community and retard its economic development. Yet early railroads also often faced cutthroat competition in many other markets due to duplicative lines and other forms of excess capacity. As a result, railroad owners repeatedly engaged in self-destructive rate wars, often moving freight and passengers at below cost in a desperate attempt to help defray their high fixed costs. Largely because of the effects of these price wars, by the end of the 1870s, railroads accounting for more than 30 percent of domestic mileage had failed or fallen into court-ordered receivership.

Seeking to redress the railroad problem, Congress passed the Interstate Commerce Act in 1887. The resulting Interstate Commerce Commission attacked the twin problems of monopoly and ruinous competition primarily through rate regulation. The ICC mandated that railroads publicly post their rates and that they charge all passengers and shippers roughly the same price per mile for the same level or category of service. This eliminated price discrimination based on sheer market power, thereby reducing regional inequalities and market distortions caused by local railroad monopolies. The ICC also used rate regulation to guard against ruinous competition by setting rates high enough to enable railroads to attract the capital they needed to maintain their infrastructure and finance a wave of new technologies, including much larger, more powerful locomotives and much safer rolling stock made of steel rather than wood. 

Starting in 1916, Congress extended this same basic regulatory framework to maritime transportation by passing the Shipping Act. It created a new agency, the U.S. Shipping Board, which was charged with ensuring that all ocean carriers publicly posted their prices and offered all “similarly situated” shippers roughly equal terms of service. To limit destructive price wars and to advance other public purposes, the Shipping Act and subsequent legislation also limited access to many domestic ports to U.S.-flagged ships. Under this regime of regulated competition, the industry was able to finance a dramatic transition from wind-powered to steam-powered ships. 

In 1935, Congress extended the same regulatory model to interstate trucks and buses by giving the ICC jurisdiction over these modes. In this instance the twin challenge of limiting ruinous competition while avoiding monopoly was accomplished through a combination of rate regulation and high regulatory barriers to entry. For example, by limiting the number of commercial interstate truck licenses and the markets that individual truckers could serve, the ICC prevented ruinous competition, thereby ensuring that truckers could earn a living wage, trucking companies could earn their cost of capital, and truck manufacturers could finance the cost of innovation.  

So it is hardly surprising that the United States would wind up applying the same principles to aviation. The story begins in the decade following Charles Lindbergh’s celebrated 1927 solo flight across the Atlantic. During these years, rapid advances in aviation technology promised a revolutionary new age of public air transport if only a viable airline business model could be found.

In 1933, Boeing introduced its Model 247, a monoplane capable of hauling 10 passengers at 155 miles per hour. Soon came a series of other revolutionary passenger planes, including the Lockheed Electra and the iconic Douglas DC-3, which could carry 21 passengers at a cruising speed of 180 miles per hour for as far as 1,200 miles. Thanks to these technological advances, by 1935 air traffic had increased to an annual rate of nearly 200 times what it had been in 1926.  

Yet the airline industry was in a state of near economic collapse. The primary reason was the destructive competition that existed among carriers. Starting a new airline required no regulatory approval, and the financial barriers to entry did not extend much beyond the cost of buying a plane and hiring a small crew. As more and more “fly-by-night” carriers flooded into the market, margins became slim or nonexistent. This lack of profitability—worsened, of course, by the effects of the Great Depression—left the industry unable to attract the capital it needed to take full advantage of new technology or even maintain existing planes. By 1938, an estimated 50 percent of all capital invested in commercial aviation had disappeared and the number of airlines offering scheduled service had shrunk from a hundred to fewer than a score. 

Carriers that survived this era did so mostly by winning lucrative, exclusive federal contracts to haul air mail along specific routes. But the process for the letting of such contracts generated repeated charges of widespread corruption involving collusion between the postmaster general and favored carriers. Responding to the scandals, President Franklin D. Roosevelt canceled all air mail contracts. Confronted with this darkening atmosphere, Roosevelt and Congress began debating how to create a regulatory structure that would enable the aviation industry to at last become economically viable while also serving the public interest. The result was the Civil Aeronautics Act of 1938, which laid out a system for regulated competition in aviation markets that would last into the jet age and beyond. 

For the architects of the Civil Aeronautics Act, a key goal was to stem the deleterious effects of unrestrained airline competition. “We are interested,” noted Senator and future President Harry Truman, “… in seeing that ‘fly-by-night’ operations do not start up and bring down prices and create chaos.”

A House committee report on the proposed legislation charged that excessive airline competition was undermining the government’s previous investments in aviation: “The government cannot allow unrestrained competition by unregulated air carriers to capitalize on and jeopardize the investment which the government has made during the last 10 years in their transport industry through the mail service.” West Virginia Democratic Representative Jennings Randolph went further, noting, “Air transport today is the only mode of transportation and communication for which there exists no comprehensive and permanent system of Federal economic regulation.” He concluded that “unbridled and unregulated competition is a public menace,” citing as examples “rate war[s], cutthroat devices, and destructive and wasteful practices.”

In the eyes of regulation advocates, these and other market failures were unlikely to go away on their own as the industry matured. Instead, they were thought to be built into the cost structure of flying. For example, when an airline operates a plane, it faces fixed costs that must be paid regardless of how many passengers are on board. At the same time, the marginal cost of adding one more passenger to a flight is small. This means that an airline can be tempted to sell seats at below the average fare needed to meet expenses if that’s what it takes to fill the plane. Even if an airline can only fill seats by offering some or all passengers money-losing discounts, this practice will at least partially cover the high fixed cost of operating the plane, thereby allowing the airline to lose less money on the flight than it otherwise would. 

To overcome these and other structural sources of destructive fare wars and inequitable pricing policies, Congress settled first on a system of entry control. In the future, to start a new airline or offer service on any route, carriers would need to apply to a new regulatory agency called the Civil Aeronautics Board (CAB) for a certificate of “public convenience and necessity.” 

The meaning of this phrase allowed for differing interpretations. As the aviation icon Amelia Earhart testified before a House committee, “I defy anyone at the present period to define convenience or necessity as applied to aviation. I feel that mere study cannot determine that matter, as we have no background yet of sufficient experimentation to afford adequate interpretation.” 

In practice, however, the standard became clearer. Once the CAB was up and running it grandfathered in most incumbent carriers by granting them regulatory authority to continue operating on their existing routes. It then used a two-step process to control new entry and routes. First it would determine how many carriers a given market could profitably support. If it found sufficient demand to support a new carrier, it would then choose among competing applicants based on their own financial viability and ability to serve the public’s “convenience and necessity.” 


The Civil Aeronautics Act also required CAB regulators to avoid facilitating monopoly power even as they constrained competition. In response to this mandate, the CAB tended to favor smaller over larger carriers in issuing operating authorities. In some cases, the CAB would enhance the economic viability of smaller carriers by allowing them to extend their limited route structures to serve particularly lucrative market segments. To ensure air service to sparsely populated destinations, the CAB sometimes required incumbent carriers to provide the money-losing service using returns on higher-margin routes. This was consistent with Congress’s mandate that the CAB ensure “any citizen of the United States a public right of freedom of transit in air commerce through the navigable air space of the United States.”

The Civil Aeronautics Act also empowered the CAB to mitigate the effects of concentrated market power by using two other policy levers. First, it gave the CAB statutory authority to block any airline mergers or acquisitions that it found to be not in the public interest. More significantly, the act also gave the CAB authority over what rates carriers could charge and mandated that they be fair and reasonable. 

In setting rates, the CAB began by compiling industry cost and revenue figures. Then it calculated what the industry’s cost and revenue would have been if the average flight had been 55 percent full. Fares were then set at a level no higher than what the CAB determined would be sufficient to generate this “revenue requirement” plus a projected 12 percent return on investment for airline stockholders.


An important additional provision of the Civil Aeronautics Act required that CAB rate setting promote “adequate, economical, and efficient service by air carriers at reasonable charges, without unjust discriminations, undue preferences or advantages, or unfair or destructive competitive practices.” Mindful of the mandate to prevent “unjust discriminations,” the CAB ensured that the per mile cost of flying was roughly the same on all routes, regardless of distance, destination, or volume of demand. Similarly, the CAB required that all passengers on any plane pay identical fares for the same class of service, thus denying airlines the ability to engage in discriminatory practices commonly used by airlines today, such as charging some passengers more than others depending on when they bought their ticket or on their final destination when they change planes. Finally, to guard against price wars that might threaten the industry’s overall financial viability, the CAB also generally prohibited any single carrier on a route from lowering fares unless all its competitors did as well. 

During the years that the U.S. was engaged in fighting World War II, civilian airline travel virtually vanished. Yet by 1955, more Americans were already traveling by air than by train, and airliners had replaced ocean liners as the dominant mode of transatlantic travel. 

A huge factor behind the explosive growth in air travel in this era was continuing dramatic advancement in aviation technology, much of it developed by the military and defense contractors during the war years. Aircraft manufacturers introduced a succession of new, ever-larger, safer, and more efficient four-engine airliners, including the Douglas DC-4, Lockheed’s Constellation, the Douglas DC-6, the Douglas DC-7, and the Boeing 377 Stratocruiser. Starting in the late 1950s, these were followed by a series of still-faster jets offering still-greater capacity and lower operating costs per passenger. By 1968, a single DC-8 could produce more annual seat miles than the entire industry did 30 years before. By 1970, the first “jumbo jet,” the Boeing 747, went into service. 

This technological revolution did not occur, however, independent of the political economy governing aviation during this period. The other huge, and often overlooked, factor was the system of regulated competition overseen by the CAB. Because of CAB market regulation, airlines escaped the self-destructive rate wars and negative margins that had previously prevailed and instead earned consistent, modest rates of return throughout the next three decades. This in turn allowed the aviation sector to attract the capital it needed to develop and deploy rapidly improving but highly expensive new generations of aircraft. 

In short, technology and regulation combined to create a virtuous cycle. Airlines under CAB regulation still faced considerable competition with each other and so were incentivized to invest in faster, safer planes. But because of the carefully balanced limits the CAB placed on competition, aircraft manufacturers in turn understood that if they developed new and better planes, airlines would have both the incentive to buy them and enough capital to afford them. CAB regulation led to airlines investing heavily in more and better planes, and as they did so, the industry grew and became more efficient, allowing more and more Americans to enjoy the benefits of air travel. 

Largely because of this virtuous cycle, the cost of flying fell dramatically under CAB regulation while the safety, speed, and quality of air travel improved even more. Real revenue per passenger mile declined under CAB regulation by an average of 1.8 percent per year between 1946 and 1978. Reflecting this trend line, the inflation-adjusted cost of a typical flight between Los Angeles and Boston fell from $4,439 in 1941 to $915 in 1978, a 79 percent decrease, even as the new generation of jets made the travel time much shorter and the journey much safer. Because of the falling cost and improving value of air travel, by 1977, nearly two-thirds of all Americans over 18 had taken a trip on a plane, up from just one-third in 1962. 

The regulated competition provided by the CAB also allowed the industry to earn enough surplus to support a comparatively well-paid, highly trained, unionized workforce during this era. Another benefit was the promotion of regional equality and more balanced economic development. By requiring high-volume, high-margin routes to effectively cross-subsidize low-volume, low-margin routes, the CAB enabled a larger airline network, thereby creating positive network effects that included allowing businesses in smaller and heartland cities to better compete in larger markets. 

This was key to the flourishing of smaller cities in this era and to the increase in regional equality. The equalization of railroad rates had already allowed heartland cities like St. Louis to compete on equal terms as manufacturing and distribution centers during the first half of the 20th century. By extending the same principle to airlines, heartland cities also became better able to compete nationally and internationally in key service industries. In the early jet age, for example, St. Louis’s abundant air service allowed the city to become a major hub for advertising and public relations firms serving national and global brands, as well as a frequent host of lucrative trade conventions. In no small measure because of their freedom from price discrimination in the transportation sector, heartland cities converged with major money-center coastal cities like New York and Boston in their per capita income during this era, while overall regional inequality fell sharply. 

In summary, under the CAB’s watch, a high-risk venture with vast national economic potential had become by the 1970s a secure and stable backbone of American civic and commercial life. But the regulatory regime had its flaws, and a growing chorus of well-placed critics.

Throughout its existence, the CAB faced criticism from different quarters. An early complaint was that it set airfares too high because it used cost estimates based on flights being little more than half full on average. No doubt the CAB did not always get pricing right. But part of the rationale for this formula was to ensure adequate revenue not just during the top of the business cycle but also during economic downturns. Another consideration was that encouraging higher load factors would erode the quality and reliability of the flying experience. Not only would planes have been more crowded, but there also would have been fewer seats available to rebook passengers whose flights were canceled due to bad weather or mechanical problems. Today’s frequently overbooked planes and lack of capacity to deal with stranded passengers are just two of many ways in which the quality of air travel has declined since the days of the CAB. 

Another line of criticism charged that the CAB, particularly over time, erred in giving incumbent carriers too much protection from start-ups and from each other. Through the mid-1970s, the CAB refused to allow a single new trunkline carrier to enter the business, leading to charges that it had created a protected oligopoly. 

Somewhat inconsistently, other critics charged that airlines were competing too much with each other over the wrong things. The CAB set rates in ways that virtually eliminated price competition. Yet airlines still had to worry about losing passengers to other carriers as well as to other modes like autos or trains for short or intermediate-length trips. So they wound up competing over quality instead of price. This included competition over service standards like on-time performance, safety, leg room, baggage handling, and frequent, convenient scheduling, but it also included, critics charged, competition over "frills," like inflight meals served on chinaware by comely stewardesses.

Another key argument leveled against the CAB was that it kept airfares unnecessarily high by underestimating the elasticity of demand for air travel. Led by the academic economist turned policy entrepreneur Alfred Kahn, these critics argued that if airlines were allowed to reduce fares to marginal costs and offer deep discounts to price-sensitive consumers, this would fill seats that otherwise would go empty. And with more passengers per plane available to share the fixed cost of flying, Kahn argued, the average cost for everyone could be allowed to fall without endangering airline solvency.

Informing the growing attacks on the CAB was the growing prestige of so-called neoclassical economics. More and more academic economists insisted on the power of unregulated markets to allocate resources to their highest valued use. According to this analytical framework, which became part of the influential law and economics movement championed by Richard A. Posner, virtually any form of market regulation was likely to cause a “dead weight loss” in society’s total welfare. Another influence was the rise of public choice theory, which emphasized how regulation could create barriers to entry that allowed established industries to escape competition and earn “monopoly rents.” 


Still another line of attack came from the era’s rising consumer movement. The consumer advocate Ralph Nader became a leading champion of airline deregulation, arguing that by creating regulatory barriers to new airlines, the CAB had allowed both airline management and unions to become overpaid and sclerotic at the expense of “the consumer.” Meanwhile many liberals, including U.S. Senator Ted Kennedy and his then Judiciary Committee staffer Stephen Breyer, the future Supreme Court justice, reasoned that deregulation would lead to more competition and lower consumer prices—a high priority at a time when the OPEC oil cartel had set off an inflationary spiral. 

Together, these forces created a unique moment in which both leading Republican and Democratic policy makers, as well as putative academic experts, formed a consensus in favor of ending regulation of airline markets. Implementation of these ideas began when President Jimmy Carter appointed Alfred Kahn to head the CAB in 1977. In that capacity, Kahn took administrative measures that allowed airlines to serve any routes they wanted under a policy known as “multiple permissive entry.” By 1978, the CAB had amended its rate-setting policies to allow airlines downward pricing flexibility of up to 70 percent. When airlines responded with deep fare cuts, the traveling public cheered, helping to build political support for disbanding the CAB altogether. In short order this caused Congress to pass, and Carter to sign, the Airline Deregulation Act of 1978, which ended any government role in managing entry, pricing, and route network structure in airline markets. 

The early years of airline deregulation saw a burst of new entrants along with steep price declines on many high-volume, long-distance routes. But it also brought a sharp decline in air service to cities in what we today call “flyover country.” In 1986, West Virginia Senator Robert Byrd publicly apologized for having voted to abolish the CAB:

This is one Senator who regrets that he voted for airline deregulation. It has penalized States like West Virginia, where many of the airlines pulled out quickly following deregulation and the prices zoomed into the stratosphere—doubled, tripled and, in some instances, quadrupled. So we have poorer air service and much more costly air service than we in West Virginia had prior to deregulation. I admit my error; I confess my unwisdom, and I am truly sorry for having voted for deregulation.

Overall, airline prices did continue to fall on most remaining routes. This caused much of the public, particularly those living in well-served, large coastal cities, to view airline deregulation as a success. But with the benefit of hindsight a far different picture emerges.

First, it’s now clear that the initial reductions in fares on high-volume routes were in large measure not the result of any increase in efficiency, but of a coincidental fall in global energy prices that occurred during the early years of deregulation. A 1990 study by the Economic Policy Institute concluded that, after adjusting for changes in the cost of jet fuel, overall airline fares fell faster in the 10 years before 1978 than they did during the 10 years after. A study published in the Journal of the Transportation Research Forum confirms that this pattern continued for many years. Except for a period after the 9/11 terrorist attack, the study found, real air prices continued to fall more slowly through the mid-2000s than they had before deregulation. Accounting for declines in the quality of airline service over this period would show real prices falling still more slowly. For example, due to the move to a hub-and-spoke system and the decline in the number of direct flights that occurred under deregulation, the time, distance, and inconvenience required to travel to many destinations lengthened substantially during this period.

The initial seeming success of deregulation was further belied during the aughts by the reemergence of ruinous competition, which wiped out airline profit margins and eventually turned major carriers into wards of the state. During all but three years of that decade, U.S. airlines had negative net income, racking up cumulative losses of more than $68 billion. Major airlines like United and US Airways declared bankruptcy and defaulted on their pension debts, requiring expensive taxpayer bailouts. Some of this distress was caused by recessions or spikes in fuel costs, but the larger structural cause was the loss of market regulation. Airlines were no longer assured of adequate margins or protection from price wars.

Relatedly, the dramatic improvements in aviation technology that had occurred under the CAB disappeared under deregulation. The fuel efficiency of jet engines has increased only marginally over the past 40 years, but otherwise, today's commercial jetliners are little changed from what they were in the late 1970s. In large measure this was a result of deregulated airlines engaging in price wars that prevented them from covering their routine capital costs and accrued liabilities, let alone investing in dramatically improved planes and service quality.

Eventually, in an attempt to reverse their declining economic fortunes under deregulation, airlines not only cut service standards but also consolidated massively, taking advantage of lax antitrust enforcement standards that existed for much of that period. After a series of mega mergers, by 2015, a single airline controlled a majority of the market at 40 of the 100 largest U.S. airports. Through control of gate access at major "fortress hubs," incumbent airlines faced virtually no competition on most of their routes, or even the threat of it. By 2021, the four largest remaining airlines controlled nearly two-thirds of the entire domestic market. Adding to the cartelization of the industry was the fact that the four remaining major carriers each counted among its largest stockholders the same four large institutional investors, creating an interlocking ownership structure rivaling that of the big colluding corporate trusts of the Gilded Age.

Along with the rise of concentrated ownership came a sharp rise in predatory pricing and other business practices, such as frequent-flyer rewards programs designed to further dampen competition among existing carriers and deter the formation of new airlines. At the same time, the airlines continue to add fees and lower service standards while eliminating flights altogether to many smaller and midsize cities, all while raising fares and engaging in massive stock buybacks. Not only does the traveling public suffer, but so does the promise of aviation itself. Now that they operate as an unregulated oligopoly, airlines once again have the profits needed to invest in new breakthrough technologies but no longer have any incentive to do so. 

The retreat from regulated competition did not just apply to airlines. Market regulation of railroads ended at roughly the same time, for example, and with much the same result. Thousands of American cities lost all rail service or became captive to a single monopolistic railroad, thereby losing their ability to compete as centers of manufacturing or distribution. Similarly, a retreat from regulated competition led to a near-total collapse of American domestic shipbuilding and ocean shipping industries after the 1980s. 

The federal government also retreated from the regulated competition model in many other key realms. These included the gas and oil sectors, electrical generation and distribution, communications, and, above all, finance. 

Until the 1980s, the regulated competition regime governing banks and other financial institutions controlled what interest rates they could charge or pay and where they could operate, while also strictly limiting mergers and their investments in adjacent lines of business. State and federal laws fostered a dense web of small-scale community banks and locally operated thrifts and credit unions. This not only prevented the growth of banks that were "too big to fail," it also prevented the rise of giant, unregulated hedge funds and private equity firms. Under this policy regime, the power of Wall Street over industrial firms was tightly contained, allowing the management of great American industrial companies like Boeing, General Motors, and General Electric to remain mostly in the hands of engineers committed to innovation rather than financiers intent on stripping out assets to maximize short-term profits.

Yet the historical role of regulated competition in building the U.S. industrial economy is so poorly understood that the very phrase seems like a contradiction in terms to many Americans. Under the thrall of “neoliberal” doctrines popularized over the past 40 years, too many of us are conditioned to view “free market” competition as inherently productive, and to view government intervention in markets as inherently the opposite. But history shows that without smart rules, market competition is often ruinous to the firms involved, to the pace of technological progress, and ultimately to the larger public interest. Fortunately, history also shows that government measures that channel competition toward productive and socially useful ends are not only possible; they were fundamental to the creation of the world’s greatest, most innovative, most broadly prosperous capitalist nation.

Can we ever get back to a system of regulated competition? At a time when the Trump administration is slashing what remains of America’s regulatory state and “abundance liberals” are bemoaning excessive red tape as a key obstacle to prosperity, reestablishing the system of market regulation might seem like a political nonstarter. But we have already seen dramatic movement in both parties over the past decade in their rediscovery of the importance of effective antitrust enforcement. This has occurred not because of any great change in their respective ideologies. It has happened because of the sheer accumulation of documented harms caused by unregulated, predatory monopolies and a growing awareness among policy intellectuals and elected officials that century-old antitrust laws could be revived and repurposed to address new conditions. 

Similarly, it’s probably only a matter of time before wide-scale downward mobility, particularly the decline of heartland communities in key electoral states, leads policy makers to rediscover the merits of regulated competition. It is not hard to imagine, for instance, politicians from both parties who represent “flyover” cities banding together to demand more equitable, affordable, and practical air service through a new regulatory regime—perhaps one that takes advantage of AI and other modern data-processing tools to make pricing and other regulatory functions more precise than under the old CAB system. 

Taking the next step back to the future by applying market regulation to key existing and emerging industries might take time. But it seems increasingly inevitable as the alternatives variously advocated by Republicans and Democrats continue to fail or prove inadequate to the political economic challenges we face.

The post The Secret to Reindustrializing America Is Not Tax Cuts and Tariffs. It’s Regulated Competition. appeared first on Washington Monthly.

Resurrecting the Rebel Alliance https://washingtonmonthly.com/2025/06/01/resurrecting-the-rebel-alliance/ Sun, 01 Jun 2025 22:57:00 +0000 https://washingtonmonthly.com/?p=159229

To end the age of Trump, Democrats must relearn the language and levers of power.

When I was a boy in Miami, we didn’t eat strawberries. Every morning I’d stare at the images on the box of Cheerios and wonder why there were no little red fruits in my own bowl. It was only later that my dad told me of his childhood. Starting at age six—after my grandparents lost their land to the bank—he’d spend 14-hour days picking berries on a sharecrop farm in Plant City, Florida. “I know how much blood,” he said, “is in each basket.” 

For most of my time at home, my dad sold insurance and my mom worked as a secretary. We lived paycheck to paycheck. But my parents saved enough for clean clothes, two-week vacations, even a new car once. Then my mom got sick and my dad lost his job and we were pretty poor. 

Still, I felt lucky. My parents were loving and supportive. Many of my friends at school had it a lot harder. Our district ran along the Snake Creek drainage canal from Norland to Carol City, then down 27th Avenue to Liberty City. Our curriculum included studying the effects of quaaludes, motorcycle crashes, drive-bys, and various forms of child abuse. 

Then there were the lessons I learned working in a factory and an industrial bakery and on the sandwich line at Burger King. All of which—as long as the boss wasn’t too much in my face—seemed preferable to helping my dad earn cash by mowing lawns and replacing tarpaper roofs. Of all my tutors, none was ever sterner than the Miami sun at midday in July. 

In the factory towns of Michigan and farms of Iowa, in the warehouses of Harrisburg and hospitals of Fresno, along the borderlands of Texas and floodplains of Missouri, most folks think about power every day. It’s not that any of us regular folk imagines having any real power. What we do think about is freedom from power. In communities like ours, economics is simple. There’s no escape from hard work. But there’s also no reason ever to be bullied and belittled by any petty foreman or distant corporate board. And that means working together to find ways to live free from any form of capricious control.

It’s in these conversations about liberty that we hear the original political economic language of America, dating to long before the founding. Most immediately, it might be the simple solidarity of letting someone sleep on the sofa so they can quit their bad job. Over time it evolves into the language of mutual protection. Sometimes the bottom-up protection of the union and guild and trade association. Sometimes the top-down protection of using the public government to limit the power of corporations and capitalists. 

Material well-being is not the only or even main subject of this language. It’s also a language of sharing out opportunity. Of breaking the barriers to making a larger human community. Of inclusion—even eccentricity and weirdness—in its inherent intent to disrupt the homogenization of uniform and cubicle. It’s a language of human dignity. Of finding one’s own purpose. Of freeing ourselves to dream. 

When I was in school, this language was largely still the language of America. We heard it in the music of Sly Stone and Johnny Cash and, a few years later, the L.A. punk band X. We also heard it in almost every utterance of Lyndon B. Johnson and Barbara Jordan and Walkin’ Lawton Chiles. And for the most part, the Democrats of that day delivered, in support for the right to organize, in protection from the chain store and the Wall Street bank, in affordable mortgages for the family home. In an entire political economy geared to empower the worker, independent business owner, and farmer—as well as the entrepreneur, innovator, engineer, scientist.

In the 1980s, the Reagan administration began to promote a new political economic language. They said the old fight for liberty from power made the economy inefficient. They said if we let corporations concentrate control over production and retail, they could build more things and sell them more cheaply. Their new language was technocratic, mathematical. It was designed, they said, to help illuminate the mechanical operations of the “market forces” that drove efficiency. 

Soon, Democrats began to follow. A new generation of leaders spoke less of sharing out power and responsibility and more of how to “deregulate” business to make—and in theory, share out—more stuff. They also embraced the idea that economics was metaphysical in nature, and lectured us on the new forces—“globalization,” digital technologies—that restricted our ability to shape our own lives.

Today we face the gravest set of threats to American democracy, and our most fundamental liberties, since the Civil War. We see this threat in the rise of a new class of oligarchs, who increasingly have the power to determine how we work, what we read and watch, how we make community, how we do business with one another, what technologies we must use and which are crushed. We see it even more clearly in President Donald Trump’s harnessing of the powers of these oligarchs for his own private purposes. 

These threats are a straight-line result of the Democratic Party’s abandonment of America’s original system of liberty. If we are honest, we will admit that Trump sits in the White House thanks largely to the rage of citizens Democrats betrayed. He raises mobs using systems of communications Democrats led the way in corrupting. He leverages the power of private autocrats that Democrats led the way in empowering. 

The task ahead for Democrats is not merely to resist and slow the predations and destructions of President Trump. It is not merely to knock the Republicans out of power in 2026 and 2028. It is to establish a new political economic regime which ensures that our liberty and prosperity are never again threatened by any homegrown oligarch or autocrat. And Democrats must do so in a world filled with great enemies, eager to exploit the chaos sown here in America by Trump and the oligarchs, to topple us.

None of this will be possible until Democrats first fully recover America’s original language of liberty. Doing so is the only way to relearn the wisdom about power and political economic structure baked into this language. It’s the only way for Democrats to convince the American people they actually understand how to make their lives better, and have the courage to act. And the only way Democratic elites can prove they understand their own responsibility for today’s crisis, and fully grasp the threats to their own lives and the lives of their own children.

The origins of what we understand today as liberalism trace to the early 17th century. It is both a set of assumptions about the nature of the individual—that every person has an equal capacity to do good in this world, and to earn entry to the next—and a set of political and economic rules designed to protect all the forms of liberty such an individual might desire, be they political, commercial, or spiritual. People first began to establish and formalize these rules in fights against the absolute monarchs of that time, especially Charles I in Britain. Liberalism, in short, is a system of political and economic rules designed to defend and expand individual liberty.

Technically, much of this new rule of law focused on protecting the properties of the individual. People understood that if the king could seize one’s land or business at will, then that person would tend to do whatever the king demanded. In practice, these efforts largely played out in the establishment of anti-monopoly laws—to prevent the king from concentrating power through either the granting of monopoly to a particular favorite or arbitrary threats to take an opponent’s property away. Not that this system of liberty served mainly the interests of the wealthy. Early liberals made sure also to protect the rights of tradespeople to use their skills, and of farmers to bring produce to market, and of authors and dramatists to copyright their work.

Further, people without any real wealth or other forms of social power forced their way into the debates about both the nature and structure of human liberty. A main path for such early democratic liberalism was the vibrant discussions in the Protestant Churches in Britain. In contrast to the hierarchies and mysteries of the established Church, Puritans preached an equality of all souls before God, which in turn led to visions of greater equality in this world. As the historian William Haller put it, "This spiritual equalitarianism, implicit in every word the preachers spoke … became the central force of revolutionary Puritanism."

We must honestly admit the radical nature and full immensity of the political threat we face, which is the direct merger of private monopoly and the state. And our own complicity in creating this crisis.

The English Revolution was the first truly modern war for individual liberty. The victory was achieved largely by the New Model Army, a professional force led by the Puritan Oliver Cromwell and composed largely of Protestant dissenters. We remember the revolution today mainly because it culminated in the beheading of Charles I in 1649 and a period of brutal oppression in Ireland. But its importance for us lies in the fact that while in the field, various groups of soldiers—animated by the “democratic rhetoric of the spirit”—formulated visions of popular democracy that profoundly shaped thinking in America a century later. What they invented, the historian Edmund Morgan wrote, was nothing less than a new conception of a “sovereign people” whose authority rests on “the rights of men.” 

In the end, the Commonwealth of England could not hold. After the death of Cromwell, Parliament and the army disintegrated, and in 1660 a new parliament arranged for Charles I’s son to be crowned king.

But in America, memories of the people’s commonwealth lived on. A key actor was Samuel Adams, born to an ardent Puritan family that later went bankrupt. At Harvard, Adams studied the writings of John Locke. But it was on the streets of Boston in 1747 that he first witnessed the power of direct democratic action, as he watched African, English, Dutch, and American sailors lead a fight against impressment. Adams responded by founding a newspaper and expressing a new vision of democratic liberty, based on a belief in universal equality. “All Men are by Nature on a Level; born with an equal Share of Freedom, and endow’d with Capacities nearly alike,” he wrote. Thomas Jefferson later recognized Adams as “the earliest, most active, and persevering man of the Revolution” and the “patriarch of liberty.” 

In any discussion of the founding, it is vital to fully recognize the repugnant nature of the compromises made to the slaveholders. But if we focus only on the snakes who extorted a Big House in the sun, we miss the work of all the regular folks who first plotted out that garden, then cut the trees and tilled the land. That work continued in 1789, when Adams and Patrick Henry helped lead efforts to force the Framers of the Constitution to append a clear statement of common equal liberty in the form of a bill of rights simple enough for every person to understand.

Over the next 70 years, the belief in human equality provided the moral leverage necessary to make good on the promises of the Declaration of Independence, first through abolitionism and then civil war.

The political leverage came from the anti-monopoly systems Americans had built into the Constitution, and the anti-monopoly provisions of the common law inherited from Britain. The idea that the early United States was an unregulated libertarian utopia is a modern, dangerous myth. In fact, Americans from the first used their local, state, and federal governments both to protect themselves from private corporate power and to work collectively to create and distribute new forms of wealth. They established simple bright-line market structure rules to protect the independence of the individual and the family, and to limit the size, structure, and behavior of the corporation, usually through direct legislative charter. And they aggressively used government power to distribute land and education to those with little or nothing.

They also carefully updated these rules to apply to the railroad, telegraph, telephone, and other powerful new network technologies. They focused less on limiting the size of these corporations and more on regulating their behavior. They aimed to prevent the people who controlled these corporations from extorting wealth and power from users, by requiring the corporations to provide the same service at the same price to all individuals and businesses.

One result was a political economy designed to promote the dignity of the individual, as well as the constructive engagement of the citizen within the political economy. Another was a fantastic explosion of material prosperity that made the average American much richer than their peers in almost any other nation.

Over these past 250 years, private autocrats twice overturned this American system of liberty. The first time came in 1877 when the “corrupt bargain” that made Rutherford B. Hayes president not only ended Reconstruction but also helped clear the way for lifting almost all traditional regulation on the corporation. This led to the rise of the vast all-powerful private monopolies of the plutocratic age and resulted, in the words of W. E. B. Du Bois, in an “Empire of Industry” that “assumed [a] monarchical power such as enthroned the Caesars.” 

Although Congress passed many vital anti-monopoly laws during this period—such as the Interstate Commerce and Sherman Antitrust Acts—it was not until the election of 1912 that Congress managed to reestablish a regulatory system fully able to protect liberal democracy from oligarchy, with passage of the Clayton Act, Federal Trade Commission Act, Federal Reserve Act, and a constitutional amendment enshrining the progressive income tax. This system, named the “New Freedom” by Woodrow Wilson, provided the foundation for the Second New Deal.

The second reaction against the democratic republic was launched in 1981 with Ronald Reagan’s suspension of anti-monopoly law. In 1993, President Bill Clinton then carried this anti-democratic libertarian philosophy to regulation of banking, finance, energy, media, telecommunications, trade, and the internet. 

In the years since, the consolidation of power and control has taken place in two broad stages. During the first, which we might call the age of Walmart, we saw the consolidation and offshoring of factories, the mass destruction of small businesses and family farms, the concentration of control in finance, the rise of Big Pharma and Big Hospital, and a slow-motion takeover of America’s housing and rental markets.

During the second, which we might call the age of Google, we saw a few vast information platforms consolidate control over online communications, commerce, debate, publishing, entertainment, and computing power, to a point where they have power to manipulate the thinking and actions both of the individual and society as a whole.

The result of these twin blows against America’s traditional system of liberty has been a collapse of the rule of law not merely in the political realm, but especially in the economic. Today in America, the chain of command is simple. The oligarchs boss us. And Trump bosses the oligarchs, including sometimes by instructing them how to direct us. In short, the restoration of monarchical autocracy, but this time amplified by surveillance tools vastly more powerful than even Joseph Stalin dared imagine.

It was not until college that I first realized the language I’d learned in Miami was not universal. At Columbia on scholarship I found a society of young people who’d never been forced to bend themselves to some job just to pay the rent. Their language was of fraternities and business mixers and parents with friends on Wall Street or in Washington or “the arts.” At graduation, they went to entry-level gigs at Salomon Brothers or Time Inc. or the State Department, or unpaid internships in film and theater.

I had long wanted to see the world. But this took money, and so I went back to the work I knew. I drove trucks across the country and dug ditches, pushed on construction sites and hauled office furniture. I learned what it’s like to not be able to scrub the smell of work off your body, how slowly a clock moves when you stand 10 hours on a line, how hard it is to sleep after you’ve picked hard rock all day. 

In time I learned how to pay for my travels through journalism, which led to six years as a correspondent in South America and the Caribbean. In addition to Maoist guerrilla war, cholera, and violent elections, I also had the opportunity to study up close two of the most brutal austerity “shock” programs of the Washington Consensus era of cartelized capital. The first, in Venezuela, led to a weeklong insurrection in Caracas, the death of more than 1,000 people, and ultimately the collapse of Latin America’s strongest democracy. The second, in Peru, did cure hyperinflation, but also left millions hungry.

Eventually I took a job running a Washington-based magazine named Global Business. Here again my timing—in terms of getting to learn how America’s new monopoly capitalism actually worked—was perfect. I started just as the Clinton administration was implementing NAFTA and finishing negotiations to create the World Trade Organization, and our circulation soon rocketed as C-suite executives at thousands of manufacturers scrambled to adapt their businesses to this radically new environment of law. We spent our days writing articles that helped corporations decide what to outsource, where to offshore, how to engineer a supply chain. I traveled to China, Singapore, South Korea, Russia, Hong Kong, and across Europe and the Middle East reporting on the revolutionary restructuring of every major industrial system in the world. 

It was thanks to this work that I immediately understood the implications of a massive earthquake in Taiwan in September 1999. That event, in turn, is how I came to understand that the American elite’s obtuseness toward the threats posed by concentrated power actually matters in the real world.

The problem triggered by the quake was easy to understand. A few years before, almost every type of industrial capacity was broadly distributed across many countries. Many factories, for instance, made wiper blades and alternators. Much the same was true for high-end items like semiconductors, as more than a dozen vertically integrated manufacturers in the United States, Europe, Japan, and South Korea competed to make roughly interchangeable products. But the radical changes in policy by Reagan and Clinton had undone this balance, first by clearing the way for corporations to concentrate control and capacity in the U.S., then by clearing the way to concentrate control and capacity at some single point in the world.

What the earthquake taught was that over the previous few years Taiwan had managed to lock up a huge portion of the capacity to make high-end semiconductors. When the quake then disrupted the ability to make and transport these chips, the result was a cascading worldwide industrial crash that within a few days shuttered factories in California, Texas, Germany, Japan, and elsewhere.

As it proved, we were lucky. The quake had not damaged the foundries, but simply disrupted just-in-time shipment of the chips. Within two weeks most factories were back online. But the quake also made clear that concentration of industrial capacity had reached a point where it was easy to imagine a far more devastating shutdown, triggered for instance by slippage of a fault line closer to the foundries. Or war, blockade, embargo, or some other disruptive political act. Or, say, a pandemic.

Over the next few years, I was among the first to describe the extreme concentration of capacity the quake had revealed, and to detail some of the potentially existential threats posed by such chokepointing of vital production. My work was anything but theoretical. It was based on conversations with hundreds of CEOs, vice presidents of manufacturing and logistics, engineers, reinsurers, and others—almost all of whom were eager to confirm my reporting. 

These managers and engineers also made clear that the problem was easy enough to fix. Unlike a pool of oil or vein of metal, we can locate machines almost anywhere. We can take all of a certain type of machine and put them in one place, or divide them among four or 40 places. What executives needed, they said, was for government to reestablish fair rules obligating all manufacturers engaged in a particular business to distribute their risk. Time and again, these engineers cited the same antique adage—never put all your eggs in one basket.

During these years, I met often with officials at the top levels of Treasury, Commerce, the Pentagon, the CIA, and the Federal Reserve, as well as the White House, and it was here that I began to see a pattern. Time and again, the economists in the room would challenge my reporting. They assured me that what the managers and engineers said could not be entirely true. Such an extreme concentration of capacity, without any backup plan, violated too many core theories. Clearly the executives must be after some handout, some sort of subsidy or protection. I heard this not merely from staff economists, but from men who had won or would soon win Nobels.

I eventually concluded that many economists educated in the post-Reagan libertarian era were simply unable to see or understand certain forms of systemic risk. They had never learned how to use law to engineer resiliency. If anything, in their fixation on efficiency, they had begun to celebrate—rather than condemn—brittleness and fragility. 

There are many flaws in the French economist Thomas Piketty’s analysis about the origins of inequality. But in his book Capital in the Twenty-First Century, he does cut to the heart of the problem posed by his U.S. counterparts. Despite their “absurd” claims to “scientific legitimacy,” Piketty writes, in actual fact “they know almost nothing about anything.” 

The idea that the early United States was an unregulated libertarian utopia is a modern myth. In fact, Americans used government both to protect themselves from private corporate power and to create new wealth.

It was in these discussions that I also came to understand there was another factor, besides the flawed theories of the economists, that further reinforced this blindness to the extreme concentration of risk in complex systems. This was the new deterministic thinking that libertarians had been pushing into U.S. policy making since the early years of Reagan. 

When the Clinton officials in the early 1990s first began to lecture Americans on how “globalization” and the digital revolution were forcefully restricting our ability to shape our economy here at home, their goal was to redirect political debate away from actions, such as stripping factories from Ohio and moving them to Chongqing, that voters would oppose. In claiming that those actions were being dictated by forces of nature, they were saying something they knew—or should have known—to be untrue.

Over time, however, I realized that more and more policy makers and journalists were beginning to actually believe in these metaphysical forces, sometimes almost religiously. I found such beliefs to be especially strong among Democrats and progressives. Historically, progressives have tended to learn such deterministic thinking from people influenced by Karl Marx, such as the economist Joseph Schumpeter. In contemporary debate, the most influential early source for such thinking was Robert Reich’s 1991 book, The Work of Nations. Probably the single gaudiest distillation was Tom Friedman’s The World Is Flat, from 2005, which he proudly described as a “technological determinist” vision of human thought and action.

Although in 2015 Reich repented of his earlier teachings, the damage had long since been done. The combination of bad science and weird metaphysics had helped foster a broad blindness—a collective incognizance—among an entire generation of progressives to the ways that extreme concentration had destabilized many if not most of the complex systems on which we depend today.

When I published Cornered in early 2010, my main aim was to help people see the new concentrations of power and control and understand all the ways this threatened their liberty, economic well-being, and safety and security. My main means was to resurrect the root language of American democracy, the language of power and structure and community I had learned growing up. Often this was as simple as replacing the word consumer with citizen, or the word welfare with liberty, or the word global with international.

Over the next few years, from a base at the New America think tank in Washington, we built a network of people able to see some key piece of America’s monopoly problem, and created opportunities to learn from one another’s work. We published many of the ideas we developed during those days here in the Washington Monthly.

For more than a decade now, we have also understood that in recovering and renewing America’s anti-monopoly system we were amending the motto that had guided the Democratic Party since the 1990s. Victory, from now on, would mean recognizing that “It’s the POLITICAL economy, stupid.”

In recent years, we have enjoyed phenomenal success. In 2016, Massachusetts Senator Elizabeth Warren thrust our message into public debate in a speech presenting a radically fresh analysis of the threats posed by concentration of ownership and control. In 2019 and 2020, groundbreaking hearings by the House antitrust subcommittee, chaired by Rhode Island Democrat David Cicilline, mapped America’s biggest monopoly threats and lighted the way for powerful antitrust lawsuits against Google and Facebook in 2020. Then President Joe Biden formally embraced our thinking in his executive order on competition in July 2021, and hired a new generation of law enforcers to carry the thinking into practice. 

That was just the start. The Biden White House also used competition philosophy to shape new visions for governing international trade, restructuring industrial systems, and protecting free speech and a free press. They also boosted innovation, protected independent businesses and farms, made it easier for working people to organize, and made it harder for monopolists to inflate prices.

What these Democrats achieved was nothing less than the sweeping away of an extreme right-wing anti-democratic philosophy, and the first stages in the restoration of America’s true liberal centrist tradition based on pragmatic regulation of corporate power and behavior. These actions reinforced true rule of law, empowered citizens to shape their own futures, and made the world more secure and peaceful.

Come 2024, Democrats had everything they needed to win any debate about power, with Donald Trump or any other Republican. The one obstacle? Much of the aging mainstream of party functionaries and elite press sang from the old Clinton hymnal. In part, this was simply a matter of money; or more accurately, of a desire by people like Senator Chuck Schumer not to chase away some of the big donors who opposed Biden’s populist policies. But it was also—in some ways mainly—a function of intellectual inertia. Of the fact that so many liberals and progressives, convinced of their own necessary moral righteousness and superior erudition, never troubled to free themselves from the ideological shackles of the libertarian revolution of the 1980s and ’90s, with its fetishization of efficiency and pre-Enlightenment metaphysics. 

Reformers tend to blame political cowardice on cupidity and corruption. What I’ve learned over the past 25 years is that fatuousness, especially when combined with lack of imagination, often plays a much bigger role. 

Consider, for instance, the failure of these same Democratic elites to understand—let alone respond to—the threats posed to their own businesses, hence their own positions within society, by Google, Facebook, Amazon, and TikTok. As these corporations rolled up illegal control over advertising, the distribution of news, movies, television, and music, and the social media and email systems politicians use to speak to voters, Democratic-leaning publishers, journalists, and policy makers failed almost entirely to take coherent actions to protect themselves. 

When threatened by the rise of a new technology, every previous generation of Americans—of whatever party—would have immediately begun to use law and other policy tools to protect the democratic foundations of a free press, free speech, and free debate. Yet this last generation of liberals basically acted as if the concentration of control over these activities—and over the businesses they owned and market systems on which they depended—was a natural, inescapable, immutable function of the forces that power technological “evolution.” By contrast, this last generation of Republicans worked avidly for years to build an entire integrated complex of news publishers and communications platforms into a massive propaganda machine designed to shape thinking, action, and voting across the entire nation and political spectrum. (More recently, the Justice Department’s antitrust victories over Google’s monopolies in advertising technology and search offer a huge opportunity for all independent publishers to begin to build next-generation advertising-supported businesses.) 

And yet still, even with all the advantages this information machine conferred on Trump and the Republicans, Kamala Harris could have won comfortably last November. As Rana Foroohar detailed on these pages in October 2023, the Biden-Harris administration could have presented a powerful story of political economic transformation, including a sophisticated strategy to break the ability of monopolists to extort America’s families. 

If there’s a single emblem of why Democrats lost, it was Harris’s repeated refusal during the campaign to own Lina Khan and her team’s work at the FTC. When Reid Hoffman in July called on Harris to promise to fire Khan, the tech mogul provided the campaign an almost perfect invitation to demonstrate that their candidate understood the nature of private corporate power today, and had the strength of character to fight that power. Here was an opportunity to list all the successes of the Biden-Harris team in lowering prices, raising wages, and ensuring freedom in the digital economy. Here was an opportunity to identify a few villains Biden had missed but Harris would now target for action.

Instead, the campaign treated Khan—and Biden’s entire brave political economic team—like bastard children. And they continued to do so even as J. D. Vance and Steve Bannon happily embraced Khan and her policies. 

And so, during the final stage of the campaign, as Trump paraded oligarchs on leashes through the ballrooms of Mar-a-Lago, the apparatbrats of the Democratic Party tutored Harris on how to kiss the Lanvin low-top.

Let’s make sure we pull the right lessons.

Yes, Democratic Party elites’ failure to recognize the continuing bite of inflation played a big role in Harris’s loss. But the Democrats’ inability to speak honestly about the threats posed by concentrated power left much more than prices unaddressed.

The task ahead for Democrats is not merely to resist Trump. It is to establish a new political economic regime which ensures that our liberty and prosperity are never again threatened.

Voters also want a party that recognizes that true democracy is not simply a matter of having your vote counted. They want a party that will protect their rights as workers, by addressing the soaring imbalances of power between the corporation and employee. They want a party that will protect their rights not to be manipulated and exploited in their day-to-day lives, by tech corporations that circle their every act and thought. People also want a party that will recognize the revolutionary upheaval in their homes as social media corporations reach into the souls of their children and spouses and brothers—addicting them to porn, gambling, gaming, and crypto—or throw open the door to vicious bullying by everyone from nasty schoolmates to corporations selling weight loss drugs.

And people want meaning. To feel, if only for a moment each day, that they are part of some common struggle.

In Trump’s sneer, in watching him force the oligarchs to kneel, many Americans see their own rage about all these indignities, their own search for justice gratified, at least in the form of punishment. In the 2024 election, the Democratic Party never delivered a believable promise to fight for true equality of opportunity, responsibility, and dignity. Nor did Trump. What he did do was promise to drag Mark Zuckerberg, Jeff Bezos, and Sundar Pichai into the same stinking pit of mud with the rest of us.

When voters turned to the Democratic Party, by contrast, they heard the treacly language of charity—of condescension—delivered in the tones of a courtier class that itself stands on unfirm ground.

For four centuries, the vernacular of popular democracy taught that we all walk the same path to salvation and enlightenment. It is a language that balances the universal and eternal that binds all humans together, with recognition of the absolute glory of each and every individual quester and dreamer. 

Yet in 2024 as these pilgrims came to the Democratic Party’s door—and in the America of the 21st century, every human being is still in some way a pilgrim making their own particular progress—Democrats offered naught but a sack of pebbles and twigs that we called “policies.”

Our minds spin. We can almost feel his finger on our chest, his spittle in our face, as with twinkling eye he jackhammers our entire world of universities and law firms and goo-goo government offices, even the Kennedy Center. As his droogs perform “a little of the old ultraviolence” seemingly right in our living rooms, we sit in our mid-century loveseats, hands folded, waiting for it all to stop. Or, like that scourge of tyrants Tim Snyder, we book flights for Toronto, or flip through listings of pieds-à-terre in the Marais. 

Ain’t that right, Mr. Jones?

Since the election, Democrats have been presented with three options for retaking power. The first, courtesy of James Carville, is to play possum till the hillbillies miss us. Second, championed by Bernie Sanders and Alexandria Ocasio-Cortez, is to oppose everything Trump does, everywhere, all at once. Third is to cozy up to good oligarchs, so they can shelter us until the MAGA storm blows over. This thanks to Ezra Klein and the “abundance movement.” 

The better path is to honestly admit the radical nature and full immensity of the political threat we face, which is the direct merger of the power of the private monopoly and the state. And our own complicity in creating this crisis. And all the ways the old libertarian thinking continues to lead us back into darkness, superstition, and savagery. 

And then we should set about finishing the job the true liberal democrats of the Democratic Party began a decade ago—of fully restoring the traditional system of liberty that smarter generations than ours designed precisely to protect us against oligarchy and autocracy. 

Simply assuming that Trump will fail is foolish; weeks from now we may sit marveling as he pulls rabbits out of the sewer in the form of peace in Ukraine and a trade deal with China. Targeting Trump only—as if he were the sole source of today’s crisis—is a good way to lose our democracy forever. Trump is a true autocrat, vicious and violent. But he is our child, birthed of our own barbaric destruction of the rules liberal democrats designed over the course of hundreds of years to master the power of private corporation and state. Trump bestrides oligarchs we created.

For now, these oligarchs fear Trump. For now, they lie quiet in the deep grass. But they know Trump will stumble, later if not sooner. And so they slowly coil themselves to strike, to make our garden forever theirs.

Behold, with open eye, the absolute disdain for all the old constraints that grows in the hearts of today’s big men, not just Elon Musk and Mark Zuckerberg, but Peter Thiel and Larry Ellison. Not only among the junta at Google, but even behind the smiling miens of Satya Nadella and Brad Smith at Microsoft, who in recent months unleashed a savage attack on democratic regulation of corporations in the UK, as they practice for bigger game. Smell their lust for control.

Their existing power over information already threatens a type of authoritarianism almost impossible for us to fully imagine. As Ellison has made clear, his aim is nothing less than an AI-powered surveillance state. Given such plans, any future American president who lacks a coherent strategy to break the oligarchs’ power will end up as little more than their enforcer, or pet.

The other threat, intimately related to that of top-down autocracy, is of chaos, collapse, and war. Every day the danger of cataclysm increases, as Trump in his Lear-like rage breaks the systems on which we rely for peace. As the oligarchs impose on society AI-amplified systems of control that lack any capacity to manage true human complexity, even as the oligarchs themselves create dangerous new chokepoints. It is a chaos that is already creating glittery new temptations for adventurism, perhaps right back at the original fault line of today’s world, Taiwan.

There is one way only to rebuild democracy, true prosperity, lasting security, and peaceful cooperation among nations. This is to break the power of the oligarchs and the system that created them. To join the people in their war to restore their liberty to use simple human commonsense tools to govern.

Today we enjoy the greatest opportunity since the New Freedom and the New Deal to frame a political economic system that truly works for every American. And given that the goal of such a system is to foster the independence, dignity, and confidence of each individual, perhaps we might even find it possible to build a new foundation for moral progress. It’s a prospect we should find exhilarating. 

So feel the cinder block wall against your back. Know with your skin you have nowhere to retreat. Relearn, thus, how to stand and fight. Relearn, thus, how to use the tools the people fashioned—over the course of four centuries—to keep you and your children free and safe. Relearn, thus, how to be a full part of a true American democratic community of equals, based on the common light in each of us.

The post Resurrecting the Rebel Alliance appeared first on Washington Monthly.

God and Man at Sea https://washingtonmonthly.com/2025/06/01/god-and-man-at-sea/ Sun, 01 Jun 2025 22:35:00 +0000

William F. Buckley Jr. spent a lifetime trying to make a coherent intellectual case for conservatism but could never articulate what it was supposed to consist of apart from owning the libs.

The post God and Man at Sea appeared first on Washington Monthly.


In 1949, the historian Arthur Schlesinger Jr. wrote a book defending liberal democracy. It was called The Vital Center. In it, he asked, “Why has American conservatism been so rarely marked by stability or political responsibility?”

Buckley: The Life and the Revolution That Changed America
by Sam Tanenhaus
Random House, 1,040 pp.

Two years later, Regnery Publishing released a slender book by a self-proclaimed “radical conservative” that almost seemed designed to confirm Schlesinger’s misgivings. It was called God and Man at Yale. The book appeared as Yale celebrated its 250th anniversary, and its depiction of the campus as a hotbed of “atheism” and “collectivism” created a furor. Writing in The Atlantic Monthly, for example, the Yale alumnus and Harvard professor McGeorge Bundy dismissed its author as a “twisted and ignorant young man.” 

The young man was delighted. In New Haven, William F. Buckley Jr.’s rhetorical prowess had made him what his classmate Gaddis Smith, later a Yale historian, deemed “almost a God-like figure.” Now his best-selling book not only catapulted him to national fame—Time called him a “rebel in reverse”—but also spawned a new and lucrative anti–Ivy League genre, from Allan Bloom’s The Closing of the American Mind (1987), Roger Kimball’s Tenured Radicals (1990), and Dinesh D’Souza’s Illiberal Education (1991), all the way up to Christopher Rufo’s America’s Cultural Revolution (2023). Buckley’s influence on generations of conservatives can hardly be overstated. He was the St. Paul of the movement, a tireless proselytizer for the true faith. His activities—he wrote thousands of newspaper columns, published dozens of books and novels, worked as a CIA agent, founded the National Review, debated James Baldwin and Gore Vidal, championed the political fortunes of Joseph McCarthy, Barry Goldwater, and Ronald Reagan, befriended leading liberals such as John Kenneth Galbraith and Murray Kempton, and presided genially, if often lethally, over the popular television show Firing Line—were legion. What to make of it all?

Enter Sam Tanenhaus. In a grand biography, Buckley: The Life and the Revolution That Changed America, Tanenhaus closely traces Buckley’s remarkable odyssey from a cosseted Catholic childhood to titular leader of the conservative movement. Tanenhaus is a former editor of The New York Times Book Review and the author of a widely hailed biography of Whittaker Chambers that appeared in 1997. Tanenhaus explains that even before completing his study of Chambers, a former Soviet agent turned anti-communist who testified against the State Department official Alger Hiss in a famous spy case in 1948, it occurred to him that the obvious bookend to Chambers’s life was a work about Buckley’s. Though this is not an authorized biography, Tanenhaus, who spent several decades researching Buckley’s life, received his full cooperation, conducting numerous interviews with him and his associates as well as performing prodigies of research in Buckley’s private papers.

Like not a few conservatives, Buckley idolized Chambers. But Buckley never heeded Chambers’s warning that his hero Senator Joseph McCarthy—“a raven of disaster”—would bring discredit on the fledgling conservative movement. Tanenhaus skillfully excavates much revealing material about Buckley’s political stances—toward McCarthy, the civil rights movement, gay rights, and South Africa—to highlight that he often spectacularly misfired in his judgments. Above all, he shows that Buckley, who repeatedly attempted to make a coherent intellectual case for conservatism, could never articulate what it was supposed to consist of apart from owning the libs. Though Tanenhaus does not himself explicitly seek to draw a line from the conservative past to the present, his chronicle suggests that radicalism has been the political right’s natural habitat.

Buckley, who was born in 1925, inherited much of his buccaneering temperament from his father, William F. Buckley Sr. A larger-than-life figure—a staunch Catholic, lawyer, real estate investor, and Wall Street speculator—W.F. was expelled from Mexico in 1921 for his support for counterrevolutionaries. He experienced bankruptcy at age forty and recouped his fortunes with an oil concession in Venezuela. In 1924, he purchased an estate in Sharon, Connecticut, that he renamed Great Elm. An anti-Semite and a foe of the New Deal (which he saw as tantamount to Bolshevism), W.F. saw his reactionary beliefs echoed by his third son, Billy, who strove to show that he was loyal “to any of Father’s opinions.”

Chief among them was the conviction that aiding Great Britain in its battle against Nazi Germany would be a colossal mistake. The household hero was, of course, Charles Lindbergh, the famed aviator and head of the America First movement (which was founded in 1940). Lindbergh had accepted the Service Cross of the German Eagle from Hermann Goering in 1938. Three years later, in a notorious speech in Des Moines called “Who Are the War Agitators?,” Lindbergh blamed Jewish influence for trying to push America into World War II. He and his followers asserted that a Fortress America was the way to go. According to Tanenhaus, “W.F. Buckley’s view, and thus his children’s, was that America’s business with foreign nations should be restricted to business in the most literal sense: trade and investment of the kind Buckley Sr. himself had pursued throughout his career.” At Millbrook, a prep school in New York, a 15-year-old Buckley delivered his first public speech, “In Defense of Charles Lindbergh,” a denunciation of the critics of the Lone Eagle for blackening the name of a true patriot whose final mission—warning Americans that a Nazi triumph in Europe was inevitable—should be heeded rather than scorned.

Buckley not only absorbed isolationist thinking but also imbibed Social Darwinist precepts. He was thus spellbound by Albert Jay Nock, a close friend of Buckley Sr.’s and frequent visitor to Great Elm who wore a cape, carried a walking stick, and wrote an influential book in 1935 whose lapidary title said it all—Our Enemy, the State. In Nock’s view, only something he called “the Remnant”—a conservative aristocracy—could safely steward America’s future fortunes. In 1943, Buckley Jr. wrote an essay at Millbrook that was very much in the Nockian spirit, lamenting that the “great defect in our democracy” was the “ultra-democratic” system of suffrage that allowed all citizens to vote. It was a conviction that he never entirely shed.

At Yale, which he entered in 1946 after serving in the U.S. Army, Buckley discovered a new mentor, Willmoore Kendall. Kendall was a riveting figure on campus, at least for a select few—a brilliantly talented professor of political theory with a choleric temperament, he relished debating (and molding) his young charges. In his short story “Mosby’s Memoirs,” Saul Bellow observed that Willis Mosby, who is based on Kendall, had “made some of the most interesting mistakes a man could make in the Twentieth Century.” A former Trotskyist turned anti-communist, Kendall—“an out-and-out conservative,” as Bellow put it—saw liberals as an internal enemy that needed to be suppressed.

Buckley, too, was an out-and-out conservative. Like Kendall, he admired McCarthy as someone who could administer vigilante justice and, incidentally, bring the nascent conservative movement to power. Once upon a time, Buckley had defended the free speech rights of Lindbergh and pooh-poohed the Nazi threat. Now that communism menaced America, the rules of the game had changed. The moment had arrived to exile the infidels who had effectively revoked their right to American citizenship. “The true American tradition,” Kendall wrote, “is less that of Fourth of July orations and our constitutional law textbooks, with their cluck-clucking over the so-called preferred freedoms, than, quite simply, that of riding somebody out of town on a rail.” Buckley and L. Brent Bozell Jr.—his closest friend at Yale and future brother-in-law—would eagerly spread that message for years to come. 

Tanenhaus puts his finger on the Buckley phenomenon: He “might or might not be the best new conservative writer and talker, but he was fast becoming its most entertaining—and possibly represented a new kind of public figure: not precisely a journalist or commentator or analyst but a performing ideologue.” Buckley, you could even say, pioneered a kind of shock therapy against liberals. After God and Man at Yale, his next great opportunity to needle them arrived with a book co-written with Bozell. It was conceived under the direction of Kendall, whom Tanenhaus vividly describes as “pacing the living room with a tumbler of whiskey and dropping cigarette ashes onto the carpet” as he “expounded on the ‘theoretic aspects of McCarthyism.’ ” When it appeared in 1954, the best-selling McCarthy and His Enemies insisted that McCarthyism formed “a movement around which men of good will and stern morality can close ranks.”

McCarthy was baffled. He said that the book was “too intellectual” for him to comprehend. But its thrust was clear enough. While Buckley and Bozell were careful to play up the domestic communist peril, they noted that another group of Americans would eventually come into their gun sights—“Some day the patience of America may at last be exhausted, and we will strike out against Liberals.” Such were the “theoretic aspects of McCarthyism” as defined by its youthful adepts.

Buckley was a key part of McCarthy’s circle, as was Bozell, who worked as a speechwriter for him. In 1954, at a banquet at the Waldorf Astoria, Buckley praised McCarthy’s former aide Roy Cohn (a future mentor to Donald Trump) as someone who “chose not to observe the Racquet and Lawn Club rules for dealing with the Communists in our midst.” 

All along Buckley had recognized that McCarthy offered an essential glue for the fledgling conservative movement. Buckley never deviated from his fealty to McCarthy, writing a novel, The Redhunter, in 1999 that lauded his unstinting efforts to expose communist subversives working in the highest levels of government. According to Tanenhaus, “The carnal hunt for enemies within, the unmasking of their apologists and allies, real and more often fanciful, brought together diverse factions of a weak and fragmented movement in the growing war against the New Deal and its aftermath.” Bashing Ivy League professors was one thing. But targeting the larger universe of functionaries in the State Department, the CIA, and even the U.S. Army was another. If executed with sufficient rigor, it offered the chance to carry out a cultural and political revolution to subvert the subversives—what the right now refers to as the “deep state.” 

In 1955, Buckley, together with a crew of ex-communists that included Willi Schlamm and James Burnham, founded the National Review. It was an exercise in right-wing revanchism, flaying the Eisenhower administration for appeasing the Soviet Union, decrying liberal journalists and academics, and complaining about an unelected ruling class that controlled government, schools, and entertainment. One contributor, Karl Hess, recalled, “At the time, I’m not too sure that any of us thought there was a lunatic fringe.” The essayist Dwight Macdonald astutely noted that the magazine was “not a conservative magazine precisely because it doesn’t stick to tradition, to conservative principles, but simply expresses the viewpoint of the Buckley type of anti-liberals, which are much too close to McCarthy for my taste.”

There was another issue on which Buckley and his chums separated themselves from the Eisenhower administration. That issue was race. In one of his most absorbing chapters, Tanenhaus scrutinizes Buckley’s experiences in Camden, South Carolina, where his family owned an estate called Kamschatka that had been the residence of the famous Civil War diarist Mary Boykin Chesnut. Tanenhaus reveals that in October 1949, Buckley’s father attended a secret meeting at the New York University Club that had been convened by the reactionary businessman Merwin Hart, an admirer of the Spanish dictator Francisco Franco and a virulent anti-Semite. The men Hart had assembled, Tanenhaus writes, “were looking ahead to a new domestic insurgency built on the campaign to preserve legal segregation in the South.” After the 1954 Brown v. Board of Education Supreme Court decision, the Buckley family single-handedly financed a weekly called the Camden News that opposed the desegregation of the South, disseminating the same themes as a local “Citizens’ Council” that was opposed to “race mixing.” Tanenhaus devotes much attention to the slippery arguments that the National Review engaged in to justify southern recalcitrance. It went from embracing majority rule as a warrant for expelling communists from America to endorsing John C. Calhoun’s abstruse constitutional defenses of the rights of the minority, which were centered on his nullification, or states’ rights, doctrine. Buckley viewed the issue of states’ rights as providing an “immensely rejuvenating theoretic substance” for the conservative movement. In 1957, he wrote an editorial headlined “Why the South Must Prevail.” In 1960, Kendall argued that Black Americans were actually a privileged class because they lived better than “most of the population of the world,” while Buckley argued that giving African Americans the right to vote would in essence disenfranchise whites.

“In 1959,” Tanenhaus writes, “Buckley’s solution was the same one he had made as a teenager writing about the great ‘defect’ in democracy.”

Buckley’s cavalier approach to democracy manifested itself once more during Watergate. George F. Will, who had become the National Review’s Washington correspondent and a fierce critic of Richard Nixon, mordantly observed that his colleagues “didn’t really like Nixon until it became clear he was a criminal.” Buckley, who was a close friend of the CIA operative Howard Hunt, knew more about Watergate than almost anyone outside the White House. He knew that the crimes had started with Attorney General John Mitchell. He knew that they reached the White House. Like a loyal clubman, Tanenhaus writes, Buckley adhered to a code of silence, disclosing nothing to Congress or the Justice Department.

With the rise of Ronald Reagan, Buckley’s movement conservatism was at its zenith. But Buckley himself had long since peaked. He never received a cabinet position, and Reagan snubbed him by failing to show up for the National Review’s annual dinner at the Plaza Hotel in 1980. “He had become the Upper East Side poster version of Reagan’s America,” Tanenhaus notes, “and seemed to be living not so much in a bubble as in a luxuriously appointed helium balloon.” His former protégé Garry Wills concluded that Buckley had become “unwillingly a dandy, a Nock, after all.”

He remained capable of stirring up indignation, writing a 1986 op-ed in The New York Times arguing that anyone detected with AIDS should be tattooed in the upper forearm to “prevent the victimization of other homosexuals.” Buckley also supported Patrick J. Buchanan’s call for a culture war at the 1992 Republican convention in Houston. When it came to the National Review itself, Buckley blundered in 1997, appointing Rich Lowry (rather than David Brooks or David Frum) as editor solely because he “thought it would be wrong for the next editor to be other than a believing Christian.”

Tanenhaus does not speculate on what Buckley would have made of Donald Trump, who has revivified many of the populist themes that Buckley and his pals had enunciated during the 1950s, when McCarthy served as their battering ram against the deep state. Instead, he concludes on an elegiac note. “Like F. Scott Fitzgerald, a writer he admired and identified with,” he writes, “Buckley was loath to give up his unwariness—an aspect of faith and also, as he found repeatedly, of faith’s dark twin, delusion. He was not always able to distinguish one from the other, but in that he was far from alone.” In so gracefully charting Buckley’s career and foibles, Tanenhaus’s magnificent work goes a long way toward answering the query that Arthur M. Schlesinger Jr. originally posed in 1949. Anyone searching for a stable or politically responsible American conservatism should look elsewhere.

The post God and Man at Sea appeared first on Washington Monthly.

Buckley: The Life and the Revolution That Changed America by Sam Tanenhaus Random House, 1,040 pp.
What Hungary Lost When It Obeyed in Advance https://washingtonmonthly.com/2025/06/01/what-hungary-lost-when-it-obeyed-in-advance/ Sun, 01 Jun 2025 22:01:24 +0000 https://washingtonmonthly.com/?p=159223

The country’s leaders thought they could restore the nation’s lost glory through alliance with Hitler. By 1945, Budapest lay in ruins and 550,000 Hungarian Jews were dead.

The post What Hungary Lost When It Obeyed in Advance appeared first on Washington Monthly.


In March 1944, the Third Reich occupied Hungary, a country that until then had been one of Berlin’s closest allies. One of those who arrived with the invading forces was a lieutenant colonel in the SS named Kurt Becher. The SS leader Heinrich Himmler, one of Adolf Hitler’s closest confidants, had tasked Becher with stealing the wealth and resources of the country’s Jews. Himmler, in ferocious rivalry with other high-ranking Nazis, was keen to expand his economic empire. Becher, a rakish equestrian and collector of paramours, was happy to assist—making sure to line his pockets in the process.

The Last Days of Budapest: The Destruction of Europe’s Most Cosmopolitan Capital in World War II by Adam LeBor Public Affairs, 457 pp.

Becher’s masterstroke was expropriating the family-owned Weiss steelworks and arms factory, one of Hungary’s most critical industrial assets. By this point, the Nazis had already exterminated millions of European Jews, so it wasn’t hard for Becher to bully the Weiss clan into handing their property to the SS. (Judicious application of torture and threats likely helped move things along.) Once the paperwork was finished, Becher arranged for the Weiss family to be ferried to safety in Switzerland—just as he had promised. Becher “was well educated, courteous, a gentleman who kept his word,” recalled one member of the Weiss clan after the war. “He was polite and businesslike.” 

One might ask why this “gentleman” didn’t simply kill the owners and take what he wanted. The answer provided by the Budapest-based British journalist Adam LeBor is at the heart of his fascinating book, which charts the fate of a country that became one of Nazi Germany’s closest wartime allies—and then tried, unsuccessfully, to extricate itself when it realized it was on the losing side. Becher sealed his deal with the Weiss family when both Berlin and Budapest were striving to maintain the illusion that Hungary was independent despite its occupation. By preserving the facade of a conventional business transaction, he could preempt complaints from the Hungarian leadership about the theft of one of their leading industrial enterprises. “Germany,” LeBor writes, “still needed Hungary as an ally.” Hitler required the cooperation of Hungary’s bureaucracy to wipe out the country’s Jews—the last large Jewish population left in Europe. 

As LeBor shows, the story of Hungary’s relationship with Hitler’s Germany was a complicated one. Right up until the occupation, Budapest had spent years as a close ally of the Third Reich. Like Germany, Hungary emerged from World War I bitter about its treatment by the victorious powers. (Hungary had fought as part of the Austro-Hungarian Empire, but ended up as an independent nation after the empire’s collapse in 1918.) In the 1920 Treaty of Trianon, the Allies imposed a harsh peace that stripped Hungary of two-thirds of its territory and 3.3 million people. A brief communist revolution in 1919 and the far-right backlash that ensued reinforced a sense that Hungarian society faced an existential threat. The man who offered the only viable solution, in the eyes of many Hungarians, was Miklós Horthy, a rare figure to emerge from the war as a genuine hero. (Horthy had finished it as commander in chief of the Austro-Hungarian navy—hence his peculiar status as an ex-admiral who ruled a landlocked country.) Having assumed wide-ranging powers as the national leader in 1920, in the immediate aftermath of the war, he embodied the feudal and reactionary mind-set of the Hungarian elite, which included a virulent strain of anti-Semitism.

And yet, as LeBor notes, “Horthy was neither a Nazi nor a Fascist.” His ideology “was a synthesis of authoritarian national conservatism, while Hungary was a managed quasi-democracy.” There was a measure of press freedom, and opposition parties were seated in parliament. But elections were fixed. 

The admiral looked with suspicion on all revolutionary impulses, on the right as well as the left. He included a few Jews (rendered acceptable by their social status and conservative views) among his friends. Even so, Horthy’s desperation to restore his nation’s greatness throughout the interwar period brought him into sync with Germany. Hitler, another ex-subject of the Hapsburg Empire, knew how to exploit Hungarian weaknesses. On several occasions, the Führer made a point of handing Horthy snippets from his territorial gains. Hungary was the fourth country to join the Axis powers (in November 1940). In 1941, Budapest allowed German troops to pass through its (ostensibly neutral) country to invade Yugoslavia. Hitler lost no time in rewarding the Hungarians: “Northern Yugoslavia was added to the lands returned to Hungary under the two Vienna awards and the occupation of Carpatho-Ruthenia,” writes LeBor. “By the end of April, Hungary had recovered 53 percent of the territories lost at Trianon.”

Budapest would not get much time to enjoy the new turf. Within months, its soldiers joined Hitler’s disastrous invasion of the Soviet Union. It was soon clear that Hungary’s partnership with the Third Reich would drag the country into the abyss. In November 1944, as Soviet forces pushed into central Europe, Hitler declared Budapest a “fortress city,” meaning its defenders would fight to the last man. Three months later, the former jewel of central Europe looked like Stalingrad. Around 40,000 German and Hungarian troops had been killed; 80,000 Soviet soldiers lost their lives. The savagery of the fighting helps to explain the viciousness of the Soviet invaders, who raped thousands of Hungarian women. Overall, some 300,000 Hungarian soldiers would lose their lives fighting for the Axis. 

But the number of civilians who died was even greater—most of them Jews murdered in the Holocaust. (The total also includes 23,000 Roma and Sinti killed in Auschwitz.) One million Jews lived in Hungary by the end of the 1930s (out of a population of around 15 million). By the war’s end, 550,000 were dead: gassed at Auschwitz or worked to death in brutal “labor battalions.” The slaughter was engineered mainly by the “emigration expert” Adolf Eichmann, who had sent hundreds of thousands of other Jews to the killing centers.

The Jews of Budapest knew what this blue-eyed SS bureaucrat in the Hotel Majestic stood for. “When one of Eichmann’s officials asked for a piano, he was offered eight,” LeBor writes. “He replied that he simply wished to play the instrument, not open a shop.” One can easily imagine the terror Jewish leaders felt in his presence. They were even more astounded when Eichmann proposed a deal—an exchange of 1 million Jews for goods and materials. By this time, well into 1944, Eichmann’s boss, Himmler, was positioning himself for German defeat. Perhaps it was time to build “good will” with the Jews.

That deal came to nothing. However, another would prove more successful, mediated by the “gentleman” Kurt Becher (Eichmann’s nemesis within the SS). Two Jewish leaders, Joel Brand and Rezső Kasztner, collected enough gold, jewels, and money to persuade Becher to authorize a VIP train out of Hungary for well-connected individuals; 1,700 Jews were ultimately saved. But Kasztner, who found a new home in Israel, would remain haunted by his dealings with the SS for the rest of his life. After the war, he vouched for Becher, helping him avoid Allied prosecution. Becher became a wealthy West German industrialist and, like so many other high-ranking Nazis, faced no accountability for his role in mass murder. Kasztner, in a sick twist of fate, was accused by fellow Jews of collaboration and was shot and killed by an assassin—an Israeli—in 1957.

Yet LeBor’s dark tale does offer moments of consolation. He has opened rich veins of reporting on the heroic efforts of neutral diplomats—the Italian Giorgio Perlasca (who posed as a Spanish envoy), the Swiss Carl Lutz, and the Swede Raoul Wallenberg—who saved countless Jewish lives by offering havens in protected buildings or providing them with spurious citizenship. In many cases, LeBor notes, “the whole operation had no real basis in international law. It was all a giant bluff, a deadly game of wartime poker.” Wallenberg and Becher, who shared a love of horses, became friends. (The heroic Swede did not, however, manage to find common cause with the Soviets, who abducted him in January 1945; his fate is unknown to this day.)

Particularly compelling is LeBor’s account of Zionist fighters who passed as members of the fascist Arrow Cross militia during the occupation:

On one such mission, one of the young Jewish men in Arrow Cross uniform was recognized by someone from his hometown, who started yelling he was a Jew. A hostile crowd soon gathered, demanding that he be taken to the police station. At that moment, two more Arrow Cross militiamen appeared, drew their guns and jabbed the two fake Arrow Cross men in the back, and marched them off down a side street. Once out of sight of the mob, they embraced. The new Arrow Cross gunmen were actually members of Dror, one of the Zionist youth movements, who came to rescue their comrades. 

Many Hungarians protected their Jewish neighbors—a reminder that far from all collaborated with the Nazis. “In 1945, the Budapest Jewish community was the largest single group of survivors in Nazi-occupied Europe, although a large number emigrated to the West or Palestine,” LeBor writes. However, he notes, survivors faced a cold welcome from those who had appropriated their homes and possessions in their absence. As LeBor writes, Hungarians can’t claim the same inspiring record of behavior as the Danes or even the Bulgarians, who showed that it was possible to defy the Germans with determined policies.

It should come as little surprise that Holocaust commemorations in today’s Hungary are correspondingly mixed. Budapest boasts a Holocaust Memorial Center and a number of smaller memorials around the city, including a “row of metal shoes along the Danube embankment” where Hungarian Nazis shot their victims into the river. But LeBor notes a widespread nostalgia for the Horthy era and detects a “reluctance” to acknowledge Hungarian deportations of Jewish citizens to the death camps. (Hungarian Prime Minister Viktor Orbán, who has condemned anti-Semitism but has also been accused of flirting with it, has spent much of his term in office rehabilitating Horthy as a great Hungarian patriot.) When LeBor sets out to find the Majestic Hotel, where Adolf Eichmann once decided the fate of thousands, he discovers that it has been turned into an apartment building, with no indication of what occurred there. We should be thankful that Adam LeBor’s book fills the gaps.

The Supreme Court’s Immunity-to-Impunity Pipeline https://washingtonmonthly.com/2025/05/21/the-supreme-courts-immunity-to-impunity-pipeline/ Wed, 21 May 2025 09:00:00 +0000 https://washingtonmonthly.com/?p=159161

Grievance dressed as law, history warped into license, today’s Court is not checking Trump’s authoritarianism—it’s codifying it.

The post The Supreme Court’s Immunity-to-Impunity Pipeline appeared first on Washington Monthly.

John Roberts greets President Donald Trump before Trump delivered his address to a joint session of Congress in the House Chamber of the U.S. Capitol in March.

As President Donald Trump sweeps the law aside, indiscriminately firing government employees, closing agencies and departments, pressuring law firms and universities, and seizing people residing lawfully in the country without due process, the nation’s eyes have turned to John Roberts. Surely, the chief justice of the Supreme Court, guardian of institutional legitimacy, will draw a constitutional red line. 

Yet Trump has thus far governed on the opposite assumption—that the Roberts Court won’t stop him—and he has good reason to believe as much. Nowhere is this clearer than in Trump v. United States, the presidential immunity case decided last year. Roberts overlooked what was in front of his nose, the January 6 assault on the Capitol, and instead penned an opinion that on its face immunized presidents against legal responsibility if they were engaged in “official acts.” Roberts insisted that this was necessary, lest presidents be afraid to make the tough decisions that often fall to them. For a Court that so frequently turns to history, one had to wonder just what history the Court was looking at. Presidents in the second half of the 20th century, even after Watergate, were not exactly shy about claiming sweeping official power.

Trump seems to have taken the ruling’s central lessons to heart: By way of executive order, clothing his action with the veneer of an “official act,” he has asked the Justice Department to open an investigation into Christopher Krebs, the former director of the federal Cybersecurity and Infrastructure Security Agency, for telling the truth to the American people. As Trump was lying about the 2020 election results, and falsely claiming election fraud and interference, Krebs, doing his job, insisted that, according to the evidence, the 2020 election was free and fair. For this, Trump is attempting to use the power of his presidency to punish Krebs.

Should the chief justice be surprised? Is he surprised that Trump might ignore the Supreme Court and disregard the niceties of the Constitution? What will the Court decide with regard to the president’s blunderbuss tariffs, his shipping of people out of the country without due process, and his firing the heads of independent regulatory agencies without cause? 

Leah Litman gives us good reason to doubt that the Roberts Court will hem Trump in. Indeed, her new book, Lawless, seeks to demonstrate that this Court was constructed to advance a Republican agenda. When Justice Antonin Scalia passed away at the beginning of an election year, then Senate Majority Leader Mitch McConnell refused to hold a confirmation vote for Barack Obama’s Supreme Court nominee. Yet when Justice Ruth Bader Ginsburg died with early voting already underway in the 2020 election, McConnell muscled Justice Amy Coney Barrett’s confirmation through the Senate. Politics over rules. If Litman is right, there is little hope that the Court will tame a lawless administration, because it is driven by “conservative grievance,” not law.

Lawless: How the Supreme Court Runs on Conservative Grievance, Fringe Theories, and Bad Vibes by Leah Litman Atria/One Signal Publishers, 320 pp.

A professor of law at the University of Michigan, former clerk to Justice Anthony Kennedy, and cohost of the hit podcast Strict Scrutiny, Litman is writing for fans, not to persuade perplexed Court observers. Each chapter is contrived around pop culture references, like “The Ken-Surrection of the Courts” and “The American Psychos on the Supreme Court”—the former referring to the Barbie movie and the Court’s rollback of women’s reproductive rights, and the latter referring to Christian Bale’s character in American Psycho and the Court’s “murder” of the administrative state. Lawless is filled with casual snark: “Okay, but that’s just like your opinion, bro(s)”; “Come on!”; “Maybe that is true … On Mars”; “Duh!”; and “O RLY?” Litman fans—and there are many—will love it. As an occasional listener to Strict Scrutiny, which is both insightful and entertaining, I found the snark somewhat distracting and juvenile. 

It’s too bad. Litman has a serious argument here: We should understand the Supreme Court as part of the Republican coalition, undoing wide swaths of law to advance the party’s political agenda. She is at her most compelling when illuminating how the Court’s opinions are part of this larger political and constitutional project, not isolated instances of constitutional interpretation. Consider the Court’s Dobbs decision, which overturned Roe v. Wade. There are long-standing jurisprudential criticisms of Roe, some of which can even trace their lineage back to Justice Ginsburg. Yet what Litman illustrates is that overturning Roe was part of a conservative vision that goes beyond reproductive rights. Abortion rights, as Litman argues, symbolized “feminism and feminists,” and Republicans sought to roll back advances in gender equality, which many saw as an attack on the family. William Rehnquist, as a young lawyer in the Nixon administration, insisted that outlawing sex discrimination would lead to the “dissolution of the family.” Samuel Alito similarly opposed changes that would bring women to Princeton, criticized the availability of birth control, and, as a young lawyer in the Reagan administration, argued for overturning Roe. Alito got his wish nearly four decades later when he authored Dobbs.

Dobbs is not disembodied jurisprudence that exists outside of politics. For Litman, it is part of a larger political effort to reject gender equality. This attitude—grievance, as Litman has it—is manifest in J. D. Vance’s quip about “childless cat ladies” and his claims that women who do not have children are “sociopathic” and “shouldn’t get nearly the same voice” in politics as people with children. Dobbs is the opening salvo: Birth control, giving women the ability to make fundamental choices about family and careers, has come under attack in Republican-controlled states. Litman observes similar moves regarding LGBTQ rights, and highlights the Republican Party’s 2016 platform, which called for justices who would overrule not just Roe but Obergefell—the 2015 decision finding state laws that prohibited same-sex marriage unconstitutional—as well.

Even the Supreme Court’s jurisprudential approach, relying on history and tradition, neglects gender. As Litman writes, 

Originalism supports a political project of taking away rights from groups that were not always included in American politics and society. It effectively maintains that a group possesses rights today only if the group possessed those rights in laws that were enacted in the 1700s or 1800s.

When the Fourteenth Amendment was ratified in 1868, women had few legal rights even within marriage, did not have the vote, and were prohibited from professions like law simply because they were women. As the Court put it in 1873, 

The natural and proper timidity and delicacy which belongs to the female sex evidently unfits it for many of the occupations of civil life. The constitution of the family organization, which is founded in the divine ordinance, as well as in the nature of things, indicates the domestic sphere as that which properly belongs to the domain and functions of womanhood.

Conservatives, some of whom have called for a “manly originalism,” as Litman helpfully reminds us, would undo gender equality as we know it. We are already witnessing tragic instances of women dying because abortion restrictions prohibit them from getting the medical care they need. 

Litman has a similarly powerful argument when it comes to the Court’s voting rights decisions. As a young lawyer in the Reagan administration, Roberts “produced memo after memo outlining objections to expanding the VRA,” drawing on opinions written by Rehnquist, for whom he had clerked, to narrow the reach of the act. Once Roberts occupied the center chair himself, his Shelby County opinion began a rollback of federal voting rights enforcement. Under the Voting Rights Act’s Section 4 coverage formula, states that had engaged in racially discriminatory practices in the past had to get federal preclearance before changing their voting laws. Roberts found this unconstitutional because it rested on outdated information. But the result was telling: States that were once part of the Confederacy began altering their election laws in ways that disproportionately made it more difficult for racial minorities, particularly Black people, to vote. We do not have to think that this is Jim Crow II to find the pattern deeply disturbing.

Yet past Supreme Courts—the New Deal and Warren Courts—also had their roots in political coalitions. And these courts also instituted profound changes to constitutional law, setting aside precedents and offering novel constitutional understandings. Is the Roberts Court different on this front?

At times, yes. Most notably, given Litman’s argument, the New Deal Court was in line with a large governing majority, and even the Warren Court, which is viewed too often as an anomaly, was embedded within the coalition of Kennedy-Johnson liberalism as it brought the white South into line with the rest of the country. Partly in contrast, the Roberts Court is supported at best by a slim plurality in a deeply divided country, and its decisions—overturning Roe, for instance—are often out of line with democratic sentiment. Plus, the current Court relies heavily on text and history but does so in a highly selective manner. On gun control and abortion rights, for instance, the Court has embraced a view of history that confines our understanding of the Fourteenth Amendment to the middle years of the 19th century. Yet confronted with whether Donald Trump had disqualified himself from office under Section 3 of the Fourteenth Amendment by instigating January 6 and the events around it that tried to keep him in power, the Court had little interest in history or original meaning. It would have been momentous to remove a presidential candidate from the ballot, and there was at least some reason to doubt that Trump had engaged in an insurrection under Section 3’s terms, but the Court simply neglected these foundational questions.

Supreme Court opinions always raise contingencies and qualifications, but Litman demonstrates how the current Court too often leans into Republican causes. And it does so even when that requires dismantling the jurisprudential legacy of its judicial icon—Justice Scalia—on issues like the free exercise of religion. Here the Court has begun to insist not only that the establishment clause allows the states to directly fund religious institutions, but also that the free exercise clause commands it. Such an understanding finds little grounding in history or original meaning, and would have baffled James Madison, but it has become part of a conservative insistence that Christianity faces persecution in contemporary politics.

Litman also chronicles how the Court has acted on long-standing Republican goals to limit the power of administrative agencies: overturning the precedent that held that courts should defer to an agency’s reasonable interpretation of a statute when it was ambiguous; demanding that agencies show clear intent on the part of Congress if their regulations implicate “major questions”; and questioning whether Congress is even allowed to delegate its power to agencies in the first place. These developments have limited the reach and power of executive branch agencies, placing that power instead in the hands of courts. Litman goes so far as to say the Supreme Court has “murdered” the administrative state. More compellingly, she insists that the Court is sweeping away well-established law based on a theory of the separation of powers that finds little grounding in constitutional text, history, or precedent. This is particularly true of the idea of non-delegation—that Congress cannot delegate its power to administrative agencies housed in the executive branch. The Court seems determined to revisit this issue, which could dismantle the administrative state and, notably, lead to widespread deregulation, which accords with the desires of leading Republican donors.

If the Court has hemmed in administrative power, it is set to unleash the power of the president by way of “unitary” executive theory. The idea of the unitary executive is that the president gets complete control over the executive branch, including the power to remove government officers for any reason he sees fit. Does this mean that the president has control over all administrative agencies, including independent regulatory agencies like the Federal Reserve? Founding-era history does not even begin to support such claims. The first great discussion about removal, the Removal Debate of 1789, found arguments on all sides. Indeed, Alexander Hamilton, deemed the father of the unitary executive, insisted in Federalist 77 that the president needed Senate approval to remove officers as well as to appoint them. If we have settled on the precedent that presidents can remove political officers, we have also settled on the fact that Congress can insulate some officers that head independent agencies from presidential control. 

Trump wants to overturn this settlement. The White House has fired an extraordinary number of government employees, including lawyers who resisted Trump’s edicts in the name of the law. In Trump v. Wilcox, the president has asked the Court to endorse his constitutional authority to remove the heads of independent agencies at will. If the Roberts Court agrees, it would sweep away nearly a century of constitutional law and vest the president with kingly power to go along with the kingly immunity it has already bequeathed him. It remains to be seen whether the putative institutionalist John Roberts can assemble his Court to preserve institutions against this constitutional assault. Litman gives us reasons to be skeptical, and she is right to remind us that preserving constitutional institutions depends on political movements that work over the course of years. That is the struggle we find ourselves in today.

Lawless: How the Supreme Court Runs on Conservative Grievance, Fringe Theories, and Bad Vibes by Leah Litman 
Atria/One Signal Publishers, 320 pp.

Clever the Twain Shall Meet

An epic new biography of Samuel Clemens confirms the Missourian’s literary mastery but contends that the most important character he ever created was his own.


In March, the John F. Kennedy Center for the Performing Arts bestowed its annual Mark Twain Prize for American Humor on the former late-night talk show host Conan O’Brien. At the awards ceremony, there was a frisson of tension in the audience. Just one month earlier, President Donald Trump had attacked the center’s programming as too “woke,” dismissed its leadership, and installed himself as chairman of the board, sparking widespread protest in the arts community.  

Mark Twain by Ron Chernow 
Penguin Press, 1,200 pp. 

On the Kennedy Center’s stage, many of the comedians roasting O’Brien also took aim at the president. But O’Brien, who has built his reputation as a nonpartisan observer, seemingly kept his powder dry. Instead, he spoke of the legacy of Mark Twain and the profound honor of receiving the award.  

“Don’t be distracted by the white suit and the cigar and the riverboat,” O’Brien chided. “Twain is alive, vibrant, and vitally relevant today.” He spoke of Twain’s hatred of bullies, his support for underdogs ranging from the formerly enslaved to Chinese immigrants—“he punched up, not down”—along with his hatred of intolerance, racism, and anti-Semitism, and his suspicion of populism and jingoism. Though O’Brien never mentioned Trump, the audience slowly awakened to the comedian’s subversive subtext. He concluded to sustained applause, “Twain wrote, ‘Patriotism is supporting your country all of the time, and your government when it deserves it.’” 

And thus with just a few spare sentences, Conan O’Brien made Mark Twain—the mustachioed, wisecracking author of America’s Gilded Age—once again relevant to American politics.  

So closely did O’Brien echo the praise of the writer’s “core principles” and irreverent wit that I wondered if someone had slipped him the advance galleys of Ron Chernow’s sparkling new biography. In Mark Twain, the acclaimed biographer takes a sledgehammer to the mythology of the quintessential American author. Like O’Brien, Chernow challenges the “sanitized view of a humorous man in a white suit, dispensing witticisms with a twinkling eye,” to demonstrate that Twain was among our nation’s most trenchant and biting social critics. Chernow asserts that “far from being a soft-shoe, cracker-barrel philosopher, he was a waspish man of decided opinions delivering hard and uncomfortable truths. His wit was laced with vinegar, not oil.”  

In his personal life, too, Twain belied his deliberately crafted, jovial public persona. Chernow wryly notes, “Mark Twain could serve as both a social critic of something and an exemplar of the very thing he criticized.” Indeed, the author who charmed audiences with his folksy demeanor and sought to create “a new democratic literature for ordinary people” while skewering elites and their institutions was exceptionally well read and cosmopolitan. He had lived for more than a decade in Europe, residing in grand châteaus and villas, and traveled the world, crossing the Atlantic 29 times. Twain’s barefoot boyhood on the banks of the Mississippi River was the stuff of legend, but the author spent most of his adulthood in New England, in a 25-room mansion with a fleet of servants, purchased with cash from his wealthy wife.  


Chernow, the Pulitzer Prize– and National Book Award–winning writer of popular biographies of Ulysses S. Grant, George Washington, and Alexander Hamilton, tackles his complicated, often contradictory subject with nuance and prolific research. Chernow explores the author’s enormous oeuvre—a gratifying surprise for those whose familiarity with Twain resides in hazy middle school memories of The Adventures of Tom Sawyer.  

But this is no literary critique. Chernow asserts that Twain was “the most original character in American history,” and he is fascinated by him more as a man than as an author, reveling in his theatricality, both on the stage and off. He writes,  

Mark Twain discarded the image of the writer as a contemplative being, living a cloistered existence, and thrust himself into the hurly-burly of American culture, capturing the wild, uproarious energy throbbing in the heartland. Probably no other American author has led such an eventful life.  

Mark Twain is a massive brick of a book, comprising more than a thousand pages, and it is the mining of Twain’s private life and its intertwining with his public image that lends the book its physical heft and its most surprising and compelling content. Chernow concludes that “Mark Twain’s foremost creation—his richest and most complex gift to posterity—may well have been his own inimitable personality, the largest literary personality that America has produced.” 

Twain was easily the most famous writer in Gilded Age America, an era whose name was coined by Twain himself. He was the nation’s first celebrity author, a consummate storyteller, the nation’s most quoted person, and for many outside the U.S., the archetypal American. He mastered a vast array of literary formats, including travelogues, novels, essays, political tracts, plays, and historical romances. He created a uniquely American voice that captured the vernacular speech of the young nation. As famous an orator as a writer, Twain elevated storytelling into a wholly original theatrical genre, conducting speaking tours that attracted massive crowds and took him around the world, from Hawaii to Australia.  

Despite his myriad achievements, Twain felt unappreciated by the literary establishment, and chafed at the label “humorist,” fearing that audiences saw him as little more than vaudevillian. In 1907, Oxford University presented him with an honorary degree. For a man of humble origins who had left school at 12, Twain considered the diploma the pinnacle of his career, and he proudly wore the resplendent scarlet graduation gown at formal events for the remainder of his life—including, charmingly, his daughter’s wedding. 

The broad outlines of Twain’s formative years are generally well known. Born Samuel Clemens to a downwardly mobile, slave-owning family in 1835, he was raised in the bustling river town of Hannibal, Missouri. It was a nostalgic setting the author returned to time and again in his writing, but rarely in person. Following his father’s death in 1847, Clemens went to work as a printer’s apprentice, and later, as a riverboat pilot, a job that fed his appetite for adventure and provided an endless stream of amusing anecdotes harvested for literary purposes. Clemens essentially sat out the Civil War—after a two-week stint as a Confederate soldier, he fled to the Nevada Territory, where he launched his writing career at the Territorial Enterprise, a newspaper catering to silver miners more interested in entertainment than reporting, and, as Chernow writes, “an ideal home for someone with Sam’s outsize powers of invention and casual relationship with facts.”  

In Nevada, Clemens adopted the nom de plume “Mark Twain,” a wink at his stint as a river pilot: On the Mississippi, a leadsman would yell out “mark twain!” upon lowering a weighted rope measuring 12 feet, to ascertain depth for safe passage. Twain’s comical sketches of the Far West and accounts of his journey established him as a travel writer. The Innocents Abroad, a humorous and irreverent travelogue of a five-month organized excursion to Europe and the Holy Land, became Twain’s best-selling book during his lifetime.  

Today, of course, Twain is revered as an iconic American novelist, whose books Tom Sawyer and Adventures of Huckleberry Finn are literary mainstays. Yet in his own time, these novels received mixed critical responses, with some reviewers troubled by the groundbreaking use of vernacular speech and questionable morality. (Little Women’s Louisa May Alcott scolded, “If Mr. Clemens cannot think of something better to tell our pure-minded lads and lasses, he had best stop writing for them.”) Twain was so discouraged by the modest sales and lukewarm reaction to Tom Sawyer that he briefly swore off writing fiction. While modern readers might assume that these works constituted the apex of Twain’s career, Chernow covers their publication in the first third of his book, leaving the bulk of the biography to discuss Twain’s lesser-known writings, and his personal dramas. 

Twain reached the peak of his celebrity long after his fiction career had largely ended. A master of self-promotion, Twain tightly controlled the marketing of his books and lent his name and likeness to cigars, whiskey, and shirt collars. He created his own brand identity, with his shock of white hair, moustache, and signature white suits—he purportedly owned 14—and delighted in public recognition. For his massive speaking tours, he printed his own witty signage, with a trademark kicker that read, “Doors open at 7 o’clock[.] The trouble to begin at 8 o’clock.”  

Twain was his own best character. He sought the spotlight, and relished interactions with the press and the public. On the road, he held court with a gaggle of reporters from his hotel bed, clad in a nightshirt while puffing on a cigar. (He reportedly smoked 40 cigars a day and quipped, “It has always been my rule never to smoke when asleep, and never to refrain when awake.”) To his family’s mortification, the flamboyant Twain ofttimes created a spectacle. Once while in London, Twain strolled from his hotel to a public bath club in a state of undress, attracting a throng of spectators, including the press corps. After reading the London Times’ coverage, one of Twain’s daughters cabled from Connecticut, scolding, “Much worried remember proprieties.” 

Twain’s literary reputation rested on his masterful ability to mask deeper messages with a light veneer. The British playwright George Bernard Shaw riffed that Twain “has to put matters in such a way as to make people who would otherwise hang him believe he is joking.” Like Twain’s most beloved characters, the author himself often portrayed this same duality—a humorous facade that disguised a darker, more introspective core. It is at this crossroads—the intersection of Twain’s lighthearted persona and the darker underbelly—that the biography is most engaging.  

Despite his jovial front, Twain could be mercurial, petulant, and demanding. He had an explosive temper and was notoriously vindictive, holding grudges for decades. He was litigious, filing lawsuits against anyone he believed had crossed him, including his publisher, business associates, and family members. Twain filed three lawsuits, two civil and one criminal, against a woman he dubbed “the reptile”—the titled landlady of his 60-room rented Italian villa. After a farcical series of events involving leaking sewage, severed telephone lines, and a rabid donkey, Twain smirked, “I was losing my belief in hell until I got acquainted with the Countess Massiglia.”   


Haunted by his financially precarious childhood, Twain was a compulsive speculator, relentlessly chasing get-rich-quick schemes, with dismal results. He was fascinated by technology, and dreamed up endless inventions, including a bed clamp to prevent kicking off blankets, and a self-pasting scrapbook. He lost a fortune—nearly $6 million in today’s money—investing in a failed typesetting machine. Certain that his publisher was cheating him, Twain founded a rival publishing house, and managed it into bankruptcy. The heavy weight of debt hung over the Clemens family for decades, pushing them into European exile in the belief that it would be cheaper to maintain a household on the continent than in Connecticut, and forcing Twain to accept creatively unsatisfying but lucrative writing and speaking gigs. He was such a notoriously awful businessman that The Washington Post opined, “One good way to locate an unsafe investment is to find out whether Mark Twain has been permitted to get in on the ground floor.” 

Twain’s private life, too, was more complex than it appeared. Twain was a fiercely devoted husband to his wife, Livy, who served as both personal and professional partner. There was not, Chernow notes, “the least hint of scandal” in their marriage. Yet after her death, Twain pursued cringey friendships with dozens of adolescent girls he termed his “angelfish.” While Chernow stipulates that there is no evidence of sexual impropriety, the biographer openly struggles with reconciling these relationships, which involved young girls visiting the lonely widower for a week at a time, often unchaperoned. In Twain’s later years, letters exchanged with the angelfish comprised fully half of the esteemed author’s correspondence.  

Twain was a doting father to his three daughters when they were young, but grew stern and overprotective as they matured, reluctant to allow them to marry and have independent lives of their own. Later, as he pursued his angelfish, he became neglectful of his daughters’ escalating needs.  

Angelfish aside, the Clemens women deserve their own biography. Livy was an heiress to a coal fortune; Twain squandered much of her inheritance on poor investments. An invalid for much of her married life, Livy was periodically forbidden by her doctor from seeing Twain in person (but not other household members) because his manner was “too excitable” and thought to precipitate her heart palpitations. Banished from the bedroom, Twain slipped love notes to his wife throughout the day. As for the Clemens daughters, Twain plucked his eldest, Susy, from Bryn Mawr after her freshman year, likely in response to a romantic entanglement with a female classmate; she later died of meningitis after refusing conventional medical care, at her father’s direction. The two younger daughters, Clara and Jean—the youngest suffering from epilepsy—spent years of their lives in sanatoriums, receiving “rest cures” that limited intellectual stimulation and contact with the outside world. Twain’s devoted, besotted secretary Isabel Lyon also occupied the dysfunctional familial orbit. Her own mental health travails, her fraught relationship with the Clemens daughters, her intimate (albeit asexual) codependence with the widowed Twain, and her ultimate betrayal of the author form a major subplot in the final third of the biography. 


Most readers will naturally be drawn to Chernow’s narratives of Twain’s writerly life, as he seeks connections between the author’s personal views and his large body of work. Mark Twain is not scholarly literary analysis, but there’s plenty of discussion of the author’s most familiar texts as well as dozens of lesser-known and unpublished writings to satisfy most.  

Chernow is particularly interested in tracing Twain’s growth in racial tolerance from the raw bigotry of his youth (in New York City for the first time, he was struck by the “mass of human vermin”), to his bold critique of slavery in Huckleberry Finn, to his vocal defense of Black, Jewish, and Indigenous people in his later career (even as he used vocabulary and stereotypical tropes that trouble the modern ear). Serendipitously, Chernow’s Mark Twain hit bookshelves the same week that James, Percival Everett’s revisionist take on Huckleberry Finn—retold from the perspective of Jim, an escaped enslaved man—won the Pulitzer Prize. For a current-day reader, when compared side by side, James is electrifying and Huckleberry Finn feels dated. Chernow acknowledges the challenges of reading not just Huckleberry Finn but also much of Twain’s writing with a 21st-century sensibility, even as he reminds readers of how radical Twain was for his time, and the ways in which his literature can inform our understanding of the past.  

In addition to race, Twain was remarkably progressive on a range of political issues—he was an early advocate for women’s suffrage, and spoke out against anti-Semitism, municipal corruption, and colonialism. But he also pulled his punches, ever conscious of the fine line he walked as a southerner-turned-Yankee confidant (and publisher) of former Union generals in post–Civil War America. While he boldly confronted slavery’s evils in Huckleberry Finn, he “shamefully ducked” the contemporary concerns of its aftermath, including Reconstruction and the rise of the Ku Klux Klan. Fervently opposed to lynching, Twain began work on an entire book on the subject, but ultimately abandoned the project, concluding, “I shouldn’t have even half a friend left, down there [the South], after it issued from the press.”  

As he grew older, and with his reputation secured, Twain felt emboldened to opine on current affairs. His interests were eclectic, ranging from alternative medicine to Philippine independence, and they provide us with a particular sight line on a moment in American history when the nation was moving from the travails of Reconstruction into the explosive growth of industrialization and nationalism. Twain had always evinced a harsh and bitter critique of society and its institutions in his fiction, but in his later years his work grew darker, drifting from humor and fiction toward essays lacking the softening mask of humor. He turned his pen against missionaries, Russian czars, the Catholic Church, and, particularly, imperialism. Not everyone was pleased. In response to Twain’s calls for the American withdrawal from the Philippines, Teddy Roosevelt called him a “prize idiot,” and The New York Times scolded Twain for “disregarding the grin of the funny man for the sour visage of the austere moralist.”   

More than a century after his death, Twain remains a mainstay of the literary canon, even as fewer of his books remain in circulation, and his most famous works are banned from many secondary schools. But Chernow is persuasive in his argument that even in his own lifetime, Twain was a larger-than-life character, “embodying something more than a great writer, that he had come to personify, at home and abroad, the country that had spawned him of which he stood as such a unique specimen.” 

Donald Trump Is Following the Sam Brownback Playbook

The former Kansas governor’s radical economic agenda undermined the state’s prosperity, decimated vital government services, tanked his popularity, and put a Democrat in power. Could the same fate await the current president?


A newly elected Republican chief executive, backed by a cadre of right-wing economists and think tanks, rides a populist wave to victory. Despite inheriting a relatively healthy economy, he unleashes a radical economic policy agenda and defunds government agencies, promising that his actions will usher in a new era of prosperity. 

Instead, the economy slows. Budgets collapse. Investors are spooked. Core services erode. Even allies defect. By the end, voters—many of whom once cheered the project—recoil, and a Democrat wins back power. 

That’s a version of the scenario Democrats are praying plays out in response to Donald Trump’s second-term agenda—and one Republicans are quietly fearing. 

But it is an equally accurate description of what’s happened in Kansas over the past decade and a half. In 2012, after riding a Tea Party wave to victory two years earlier, Kansas Governor Sam Brownback launched what he called “a real live experiment” in conservative governance, slashing income taxes, starving the state budget, and insisting that a burst of economic growth would follow to pay for it all. 

But that growth, and fresh tax revenue, never came. To fill his ballooning budget holes, Brownback squandered the state’s surplus, drained the rainy-day fund, fully privatized Medicaid, raided the state’s highway funds, decimated state agencies, and cut public education funding. When the service cutbacks started affecting people’s lives, even Brownback’s own supporters started to notice. Roads fell apart. Class sizes grew. And Brownback’s approval rating sank from 55 percent in his first month in office to 22 percent in 2016. Rather than finish out his second term, he took a job with the first Trump administration in 2017. The following year, Laura Kelly, a Democratic state legislative leader with a clear record of opposing Brownback’s reforms, won the governorship. Four years later, she was reelected, in part by tying her opponent to Brownback. To this day, she is one of the most popular governors in the country. 

No political parallel is perfect, of course. Brownback didn’t have the cult following Trump does. On the other hand, the chaos Brownback’s policies caused didn’t kick in for several years, whereas Trump’s shock-and-awe actions have already rattled the country and the world. 

We can’t predict the future. But we can look at Kansas. Brownback’s story offers Republicans a warning—and Democrats a possible path to victory. 

A lawyer who grew up on a family farm in eastern Kansas, Brownback ran for Congress as a moderate in 1994 but quickly became one of the most radical members of Newt Gingrich’s revolution—so radical, in fact, that he refused to sign the Contract with America because it wasn’t conservative enough. As ambitious as he was ideological, he ran for the Senate two years later, won, and was reelected in 2004, both times with heavy financial support from the Wichita-based Koch brothers. In the 2008 cycle, he entered the GOP presidential primaries as a favorite of the religious right (he had grown up Methodist but converted to Catholicism during his first term in the Senate), didn’t catch fire, and soon set his sights on the Kansas governorship, which he won in 2010 with a plan to make Kansas a model for small-government revival. 

In a Republican Party increasingly drawn to theatrics, Brownback was something rarer: a radical ideologue who followed through on his bluster. Whether campaigning for religious freedom abroad—he once emerged as a leading voice in Congress against the genocide in Darfur—or slashing taxes at home, Brownback brought the zeal of a missionary to every cause he took up. His theory for Kansas wasn’t new. It was the old supply-side catechism—cut taxes, shrink government, and wait for growth to swell the coffers and supercharge the economy. What distinguished Brownback was the scale of his ambition and his indifference to evidence. 

Brownback went all in in the spring of 2012. He signed a law that slashed income tax rates, eliminated taxes on pass-through business income, and made no serious effort to offset the losses. Moderate Republicans had hoped to negotiate a more tempered version of the plan in conference committee. They didn’t. Brownback got everything he wanted. His tax regime would go into effect starting New Year’s Day, 2013. 

Conservative think tanks cheered the law on. Arthur Laffer—he of the infamous napkin diagram—had helped create the law. Grover Norquist gave his blessing. The Wall Street Journal trumpeted a Cato Institute report giving Brownback an A grade for “the biggest tax cut of any state in recent years relative to the size of its economy.” 

The laboratory was well chosen. Kansas was—and remains—deep red. Since 1940, it has backed the Republican presidential nominee in every election except 1964, when it went for Lyndon Johnson. In keeping with the libertarian bent in the American psyche, if you tell Kansans you are going to lower their taxes, cut government expenses, and make programs more efficient, they’ll give you a great deal of leeway. 

But beneath Kansas’s right-wing surface lies a persistent ideological split. The state’s Republican electorate includes both hard-line religious conservatives and a sizable bloc of moderates. That tension helps explain why Kansas has a long tradition of electing centrist Republicans—and even the occasional Democrat—to the governor’s mansion. 

“Kansas is Republican,” one former state Republican politician told me, “but it’s not crazy.” 

Almost immediately after the cuts went into effect, state revenues plunged. By June 2013, just six months into the new tax regime, Kansas was $338 million short of its revenue projections. Within a year, the state had lost nearly $700 million—an 11 percent drop in revenue. Credit downgrades followed. But the administration continued to insist that growth was coming. Brownback dismissed economists’ warnings and suppressed internal projections that contradicted the party line. 

Job growth didn’t materialize. Brownback had promised 100,000 new private-sector jobs. By 2014, a year into the “experiment,” the state had added fewer than 20,000—well behind national trends and trailing neighboring Missouri and Colorado. According to the Center on Budget and Policy Priorities, the year the cuts took effect, the Kansas economy shrank, even as the national economy grew. Then, in 2014, the state’s economy grew but still lagged the national average. By 2015, Kansas’s real GDP was growing at half the rate of the national economy. According to the CBPP, in five of the six years before the tax cut, Kansas’s economy had grown faster than the nation’s as a whole.  

What did expand—quickly and predictably—was income tax avoidance. Pass-through entities multiplied practically overnight as doctors, lawyers, and accountants restructured to avoid paying state income taxes. “I’m making out like a bandit,” one attorney told The Kansas City Star. “And it’s completely unfair,” he complained, because his secretary, like most Kansans, still paid full freight. 

Before a critical mass of voters caught on, Brownback’s administration was able to paper over the fiscal collapse with budget chicanery. It drained the state’s budget surplus, moved money from the highway fund, cut the state’s pension fund contributions, and tightened eligibility requirements for programs assisting the poor. 


The result was a slow bleed: Agencies were hollowed out gradually, and the most vulnerable Kansans suffered, but the daily lives of most voters hadn’t yet been directly affected by the time the 2014 elections rolled around. As Ashley All, a veteran Democratic operative in Kansas, told me, “They were able to conceal the scale of the crisis just long enough.” Brownback squeaked by to reelection, aided in part by a well-timed reminder that his opponent had visited a strip club 16 years earlier. 

It wasn’t until after the election that Brownback was forced to take actions that shocked average voters. Two weeks into his second term, another round of credit rating downgrades further raised the state’s borrowing costs and led to more budget cuts in education and infrastructure. Brownback slashed $45 million from school budgets to fill the growing hole. That forced some school districts, already cash strapped from the governor’s previous cuts, to begin sending students home at two p.m. Others canceled classes on Fridays or started summer vacation early. Working parents, left in the lurch, had to scramble to find child care. “We felt we didn’t have a choice,” said Janet Neufeld, superintendent of the Twin Valley School District, which then served 590 students in eastern Kansas. The district ended its academic year 12 days early that year. “It’s not good for kids, it’s not good for families,” Neufeld told Bloomberg. “There have been times when things were tight, but this is the absolute worst I’ve ever seen it,” Mike Sanders, superintendent of the Skyline School District, said at the time. Even parents in districts that didn’t impose such drastic service cuts read or heard about them in the media and worried that their schools would be next. 

Meanwhile, the governor shelved 24 highway expansion and modernization projects, including ones he’d campaigned on, such as a $29 million widening of a dangerous 24-mile stretch of K-177 known for rollover accidents and blind curves. During Brownback’s first term, almost 50 percent of all highway fatalities recorded in Kansas occurred on roads like K-177: two-lane highways with no controlled access. “New Budget Deal Postpones Upgrades on Lethal Two-Lane Roads, Harrowing Intersections,” read a Topeka Capital-Journal headline. Republican legislators joined Democrats in blasting the broken promise. 

And then came the great irony. Having promised sweeping tax relief, but facing a $400 million shortfall and another credit downgrade, Brownback signed the largest tax increase in Kansas history. Yet rather than place the burden on high-income individuals who had benefited from his first-term reductions, the new law raised the state sales tax, including on groceries, and hiked cigarette taxes, disproportionately penalizing middle- and lower-income Kansans. Conservative lawmakers wept as they voted affirmatively. (Cue the world’s smallest violin.) Brownback insisted that it wasn’t a tax hike—“Look at the totality,” he said. “Overall, it’s still a net tax cut.” But even Grover Norquist called it what it was: a violation of his “Never raise taxes” mandate, and one that particularly punished the poor. 

“It wasn’t abstract anymore,” said All, who went on to serve as communications director for Laura Kelly’s 2018 campaign. “Parents were paying extra at the grocery store and seeing 30 kids in a classroom. People were noticing delays on roads. You didn’t need to follow politics whatsoever to understand something had gone very wrong.” 

The experiment had obviously failed. But the governor wasn’t done defending it. He blamed “media elitists,” “leftists,” and “liberal outside groups” for misleading the public. “Some at the local Missouri-based paper in Kansas,” he warned in bold-faced type in one fund-raising letter, had “even taken to openly rooting for the Kansas economy to fail.” 

The political system finally caught up in the 2016 elections. That year, a group of moderate Republicans targeted Brownback-allied legislators in the primaries, running on a promise to restore school funding. It worked. More than a dozen Brownback allies lost their seats to moderates. In November, Democrats, also running on schools, flipped 12 additional seats. What emerged was an unusual, and tenuous, coalition: Democrats like Laura Kelly, a veteran state senator who had opposed the tax cuts from the beginning; moderate Republicans; and disaffected former Brownback allies who had decided enough was enough. 

Don Hineman, a rancher from Dighton and self-described “Eisenhower-Dole-Kassebaum Republican,” had broken with Brownback and won reelection that year. He served as house majority leader during the legislature’s repeal of the tax cuts. “By 2016, it was obvious to a lot of Kansans that things had gone too far,” Hineman told me. “You didn’t need a spreadsheet. You just had to look around.” 

In 2017, Hineman’s coalition succeeded where years of external pressure had failed. The legislature passed a repeal of major provisions of Brownback’s tax plan, including the notorious LLC loophole, over the governor’s veto. The votes were bipartisan. The Republican senate majority leader at the time, Jim Denning, who had supported the original tax bill in 2012, offered a quiet admission from the floor: “I’ve made many bad business decisions … ,” he said. “But I’ve always backed up and mopped up my mess.” 

This alone was remarkable: a Republican legislature repudiating the ideological centerpiece of its own governor’s administration, over his flailing objection. Moderation reasserted itself—not just as temperament, but as a governing necessity. If Brownback’s tax plan had been an experiment, the results were in—and the legislature was cleaning up the lab. 

Brownback left office early, in January 2018, to join the Trump administration as ambassador-at-large for international religious freedom. By then, his approval ratings had cratered into the 20s, and he was widely considered one of the most unpopular governors in the country. Yet his presence lingered over Kansas politics—not only because of what had happened, but also because of how distinctly it was tied to an ideological vision of government. 

Lieutenant Governor Jeff Colyer finished out the remainder of Brownback’s term but lost the 2018 Republican primary to Kris Kobach, a former law professor, national anti-immigration figure, and vice chair of Trump’s short-lived “election fraud” commission. Kobach was cut from the same cloth as Brownback, and he made little effort to distance himself. At a televised debate, when asked if he thought Brownback was a good governor, Kobach raised his hand. The moment became the centerpiece of Democrat Laura Kelly’s advertising strategy. 

“It was devastating,” Samantha Poetter, a current Republican state representative and Kobach’s communications director at the time, told me. “That debate clip probably played a large part in his downfall.” According to Poetter, the backlash wasn’t about abstract economics. “Probably 75 percent of it was the schools,” she said. “People felt that.” 

Kelly won by five points in a state Trump had carried by more than 20. Coupled with her shrewd promises to restore government functions and offer tax relief for the working class, the campaign message—Don’t go back to Brownback—stuck. 

And Kelly didn’t just campaign against Brownbackism—she governed against it. In her first term, she vetoed two Republican tax cut bills, warning that they would reopen the fiscal wounds Kansans had just started to stitch shut. She restored funding to schools, preserved a $1.1 billion surplus, paid down debt and pensions, and, importantly, steered relief toward working families—making good on her campaign promise to phase out the sales tax hike Brownback had imposed. 

In 2022, Kelly faced a different political environment. National Democrats were focused on abortion rights after Dobbs, but Kansas voters had already rejected an anti-abortion constitutional amendment. Kelly’s team made a strategic decision: They wouldn’t run a single TV ad on the issue. Instead, they went back to what worked. Her Republican opponent, Derek Schmidt, hadn’t spearheaded the Brownback agenda—but he hadn’t opposed it either. 

“We knew from our data that Brownback was still toxic,” Jordanna Zeigler, a senior adviser on Kelly’s campaign, told me. “The association was still damaging.” With middle-class tax cuts of her own to flaunt, Kelly tied her opponent to Brownback’s miserable upper-income tax-cutting experiment. 

Schmidt tried to nationalize the race, tying the Democratic governor to Joe Biden, inflation, and culture war issues like trans athletes. It didn’t work, because Kelly, who understood the politics of her state, neither ran nor governed as an ideological progressive. Her signature 2022 campaign ad featured her standing in the middle of a road bragging about how she worked with Republicans and Democrats to turn budget deficits into surpluses while funding schools and roads. She won reelection. 

As recently as last year, she vetoed a bipartisan tax cut bill that she believed endangered the budget’s long-term stability. In Kansas, fiscal restraint has turned out to be good politics. Her approval ratings now hover in the mid-50s, among the highest of any governor in the country. 

Brownback isn’t the only Republican once considered untouchable who later became politically toxic thanks to a fatal cocktail of ideological economic policy and government service failure. In early 2005, President George W. Bush, recently reelected with rock-solid support from an adoring conservative base, despite the war going badly in Iraq, announced a plan to partially privatize Social Security. The more average Americans learned about the plan, the more they hated it, and the GOP majority in Congress refused to bring the plan up for a vote. As that drama was playing out, Hurricane Katrina struck New Orleans. Blame for FEMA’s shamefully shambolic response fell on the president for putting inexperienced federal managers (“Brownie, you’re doing a heck of a job”) in charge of the agency. Bush’s approval ratings never recovered. When he left office in 2009 amid a collapsing economy, even hard-core conservatives had abandoned him. 

Donald Trump might seem different—invulnerable, able again and again to defy political gravity. But remember what lost him reelection in 2020: his chaotic mismanagement of the COVID pandemic. He returned to power in 2024 largely on voters’ memories of the strength of the economy during his first three years as president—an economy he inherited from Barack Obama. Now he is pushing through radical policies that threaten to undermine both the economy and vital government services. If both come to pass on his watch in ways that reach the everyday lives of Americans, it’s worth considering that he, too, could leave office in disgrace and become an anchor around the necks of the Republicans for years to come. 

Don Hineman, the moderate Republican rancher who led the repeal of Brownback’s tax cuts in the statehouse, sees the parallels. “I think what’s happening on the national level is pretty much a replay of what happened to Kansas between 2012 and 2018,” Hineman told me. “The anger is bubbling every day.” 

And that could redound to the long-term benefit of Democrats—if they find a leader savvy enough to understand average voters and give them the policies they want and need. 

The post Donald Trump Is Following the Sam Brownback Playbook appeared first on Washington Monthly.

How Reproductive Freedom Advocates Outsmarted the Anti-Abortion Movement https://washingtonmonthly.com/2025/04/25/how-reproductive-freedom-advocates-outsmarted-the-anti-abortion-movement/ Fri, 25 Apr 2025 09:00:00 +0000 https://washingtonmonthly.com/?p=158826

Since the reversal of Roe v. Wade, the number of abortions is up because of telehealth and the free sharing of mifepristone and misoprostol.


The number of abortions in the United States has skyrocketed in recent years despite the Supreme Court overturning constitutional abortion rights in 2022 and 18 states banning first-trimester abortions. Spurred by restrictions, abortion rights advocates have pioneered new abortion pill delivery routes both inside and outside of the medical system that have revolutionized abortion access in the U.S. in ways anti-abortion policy makers will likely not be able to stop.  

While 930,160 women obtained abortions through the medical system in 2020, that number grew to 1,033,740 in 2023, the first time in more than a decade that the number of abortions exceeded 1 million. In 2024, the number rose to 1,038,090, a 0.4 percent increase over 2023. In Mississippi—the state that brought us Dobbs, the case overturning Roe v. Wade—the number of those obtaining abortions from inside the medical system climbed from 2,850 in 2020 to 3,305 in 2023—a 16 percent increase in just three years. These numbers do not include abortions occurring outside of the medical system—estimated to be more than 100,000 since 2022. 

There are many reasons for the rise in abortions: more unwanted pregnancies, because states with abortion bans have also reduced access to contraception; the deterioration of social safety net programs that enable women to afford to bring pregnancies to term, such as child care, Medicaid expansion, food programs, and income supports; the medical dangers of carrying a pregnancy to term in states with abortion bans, where women face higher obstacles to obtaining emergency care if they experience complications later in pregnancy; and increased criminalization of pregnancy post-Dobbs that may disincentivize carrying a pregnancy to term. However, none of these factors explains how patients are accessing abortion in the more restrictive legal environment. 

In my new book, Abortion Pills: U.S. History and Politics, I explain that the primary reasons for the expansion of abortion access despite state bans are the advent of telehealth abortion during the pandemic and the development of community support networks that provide free abortion pills to individuals living in restrictive states.  

Inside the medical system, women in states with bans are accessing telehealth abortion from providers in eight states—California, Colorado, Maine, Massachusetts, New York, Rhode Island, Vermont, and Washington—that have telehealth provider shield laws allowing clinicians to serve those seeking abortions no matter where in the U.S. they reside. Six shield state providers—Abuzz, Aid Access, Choice Rising, The MAP, A Safe Choice, and We Take Care of Us—together employing dozens of clinicians, are now mailing abortion pills to more than 10,000 patients each month. In addition, two international telehealth providers—Abortion Pills in Private and Women on Web—serve individuals seeking abortions throughout the United States. 

Telehealth abortion care is more convenient, private, prompt, and affordable than in-clinic abortion services, which cost an average of $550. Telehealth abortion is available for $150 or less. For example, one provider—The Massachusetts Medication Abortion Access Project (The MAP), located in Cambridge, Massachusetts, and serving patients in all 50 states—charges $5 and up. Telehealth abortion has expanded access for those who live in rural areas and those who can’t afford to take time off from work to travel to cities, where most abortion clinics are located. Telehealth has also significantly increased the number of abortion providers. In Massachusetts, for example, the addition of 17 new telehealth clinics has more than doubled the number of abortion providers in the state. Telehealth providers offer prompt consultations, often asynchronously, and quick delivery of abortion pills by mail, compared to possible delays of a week or more for in-person appointments at brick-and-mortar abortion clinics. Telehealth abortion also means that patients do not have to walk through groups of aggressive—and often violent—anti-abortion protesters. 

Many seeking abortions are now obtaining pills outside of the medical system from community support networks. Red State Access shares information about networks providing free abortion pills in every U.S. state and territory that bans or restricts abortion. A new community network called DASH is now providing abortion pills to people in restrictive states as well. Another widely used avenue for obtaining abortion medication is through websites such as Private Emma, Pill Pulse, Medside 24, Privacy Pill Rx, Life Easy on Pills, and ybycmeds.com, which sell pills for much less than what it costs to obtain them from medical providers in the U.S. 

Multiple free and confidential resources are available to those seeking abortions, including the Miscarriage and Abortion Hotline, which provides caring and accurate information and support from experienced healthcare professionals; the Reprocare Healthline, offering anonymous peer-based support, medical information, and referrals; Self-Managed Abortion; Safe & Supported (SASS), a secure digital resource for accessing information and support; the Online Abortion Resource Squad (OARS), which moderates an abortion subreddit answering questions about abortion and providing support; and the Repro Legal Helpline, offering legal support. There’s even a chatbot, Charley, that can guide you through your options based on where you live and how far along your pregnancy is. The organization Plan C has a website that provides details on how to obtain abortion pills in all 50 states as well as U.S. territories and includes a vetted list of websites selling pills. 

The convenience and affordability of abortion pills are critical because half of those seeking abortions in the U.S. live in poverty, and another one-quarter have a low income. Sixty percent already have children. Telehealth abortion, community networks, and websites selling pills enable those who lack transportation or who can’t take time off from work or afford child care to obtain abortion services. 

These new avenues to abortion, both inside and outside of the medical system, work so well because the two medications used for abortion—mifepristone and misoprostol—are 97.4 percent effective and safer than Tylenol. Mifepristone blocks progesterone, which stops a pregnancy from developing. Then, 24 to 48 hours later, the patient takes misoprostol, which causes uterine contractions to expel the pregnancy tissue, usually within several hours.  

This abortion access revolution has been hard fought and a long time coming, as I document in my book. The French pharmaceutical company Roussel Uclaf patented mifepristone in 1980 and the French government approved the medication in 1988, but American anti-abortion politics delayed FDA approval here for more than a decade. When the agency finally approved mifepristone in 2000, it slapped the drug with medically unnecessary and burdensome restrictions, mainly due to anti-abortion political pressure and threats of violence. The FDA limited who could prescribe mifepristone and required patients to make multiple in-person visits to obtain the medication. As a result, American use of the pills lagged far behind Europe’s.  

Anti-abortion forces also blocked the development of mifepristone for treating fibroids, endometriosis, and postpartum depression, despite research showing its efficacy.  

Finally, between 2016 and 2021, advocates convinced the FDA to lift some of its restrictions on mifepristone for abortion, such as in-person dispensing. That opened the door to increased access. Today, more than two-thirds of abortions occurring inside of the medical system are done with medications, and 20 percent of all abortions are done through telehealth services. 

Anti-abortion activists and politicians are working on multiple fronts to restrict abortion pill access, including filing a federal lawsuit to reverse FDA approval of mifepristone in order to eliminate telehealth abortion or remove mifepristone from the market entirely; threatening to misuse an 1873 anti-obscenity law, the Comstock Act, to criminally prosecute anyone mailing abortion pills; civilly suing and criminally indicting shield state telehealth abortion providers; criminalizing the possession of abortion pills by making them controlled substances; and criminalizing their use (which no state has yet done). Meanwhile, supporters of abortion rights have filed lawsuits to remove the remaining FDA restrictions on mifepristone and to challenge states’ limits on abortion pills, arguing that those limits are preempted by federal law. In addition, they’re defending, strengthening, and expanding shield laws for telehealth abortion providers. 

In my book, I chronicle the creative, determined, and courageous activists who brought abortion pills to the country in the 1990s and who have recently revolutionized abortion pill access despite growing restrictions on abortion in many states. “Abortion pills are here to stay,” Elisa Wells, Plan C’s cofounder and codirector, said after Donald Trump was elected. “Community distribution networks and overseas providers will remain intact, and abortion pills will continue to come into the country.” 

In Defense of Everything-Bagel Liberalism https://washingtonmonthly.com/2025/04/24/in-defense-of-everything-bagel-liberalism/ Thu, 24 Apr 2025 09:00:00 +0000 https://washingtonmonthly.com/?p=158816

Critics warned that the Biden administration put so many conditions on the grants it offered to semiconductor manufacturers that the centerpiece of its industrial policy would fail. Those conditions turned out to be key to the program’s success.


In a 2023 New York Times column, Ezra Klein coined the term “everything-bagel liberalism” to describe the phenomenon of projects that liberals favor getting weighed down by seemingly ancillary requirements liberals impose on them. He noted, for instance, that in California, strict labor and environmental standards embedded in various pieces of legislation passed by the Democratic legislature over the years have made it too costly and time-consuming to build subsidized housing for the homeless. 

This same problem, he warned, would imperil one of the centerpieces of the Biden administration’s economic policy: the CHIPS and Science Act. Klein praised the flagship legislation’s aim of reshoring semiconductor manufacturing with a $39 billion fund to attract companies to the United States. But in the administration’s implementation of the bill, he found a plethora of conditions for evaluating prospective grantees that worried him—ranging from applicants’ commitments to workforce development for racial minorities and women, to environmentally friendly operations, to community investments in transit, housing, and child care, and more. Klein warned that this litany of priorities not directly related to the main task at hand—bringing semiconductor manufacturing to the U.S.—risked overwhelming the core enterprise. “The government is adding subsidies with one hand,” he wrote, “and layering on requirements with the other.” An industrial policy mission as complex as bringing back semiconductor manufacturing calls for “an intensity of focus that liberalism often lacks,” he concluded.

Klein resumed his critique of the Biden administration’s industrial policy approach in his book Abundance, coauthored with The Atlantic’s Derek Thompson and released in March—and the two were hardly alone. When the administration first issued the CHIPS Act funding guidelines, Catherine Rampell of The Washington Post wrote that the requirements were the latest example of how “virtually every ambitious program gets saddled with too many other unrelated objectives to do any of them well.” Matthew Yglesias wrote in his Slow Boring newsletter that if promoting semiconductor manufacturing is an important national goal, then the administration should “act like it’s important and say that other causes need to fall by the wayside.” The blogger Noah Smith likewise bemoaned liberals’ “laundry list” habit of “inserting every goal into every project.” Senator Ted Cruz and a dozen of his Republican colleagues got in on the action too, blasting the administration in a letter for including “application requirements that are ancillary to accomplishing Congress’s stated goals or otherwise squander taxpayer dollars on social policy objectives.”

The argument that liberals inadvertently produce scarcity by entangling their own projects in a web of competing priorities is a central tenet of what’s become known as “abundance liberalism”—an emerging policy movement of which Klein and Thompson are among the most prominent proponents. And in certain contexts, they are right. For instance, I’ve argued in these pages in favor of permitting reform to accelerate the building of clean-energy infrastructure, and elsewhere have endorsed using the Defense Production Act to bypass procedural barriers for important green projects.

But just as too many conditions can doom a project in some contexts, so can too few conditions spell doom in others. Indeed, despite all of the op-ed page consternation, the CHIPS Act’s everything-bagel conditions turned out to be a nothingburger. When stacked up against high-profile economic development failures of the recent past, the CHIPS Act shows that a little bit of everything-bagel liberalism is actually quite useful to crafting effective and politically sustainable industrial policy. When government is handing out vast sums of taxpayer dollars to private industry, the best and only way to achieve public ends is to put the right conditions on the money.

It’ll be years, of course, before we can know for certain whether Joe Biden’s CHIPS Act achieves its goal of rebuilding an internationally competitive advanced microprocessor fabrication industry on U.S. soil—and whether it can withstand Donald Trump’s vendetta against the “horrible” law enacted by his predecessor. But by many measures, the CHIPS Act is already achieving its aim of creating a domestic alternative to the existing geopolitically fraught Taiwanese-dominated semiconductor supply chain. The act’s semiconductor grants program wound up “vastly oversubscribed,” according to Biden’s commerce secretary, Gina Raimondo. The Department of Commerce received some 600 statements of interest from companies seeking to build semiconductor fabrication facilities (known colloquially as “fabs”) in the United States, collectively requesting $70 billion in funding—nearly double the amount available under the law. To take one prominent example, the Taiwan Semiconductor Manufacturing Company (TSMC) received a grant to create a new plant in Phoenix, which is now up and running and achieving chip production yields that outpace the company’s Taiwan plants—a key mark of success. In March, the company announced plans to more than double its investment in Arizona.

The administration’s supposed gauntlet of conditions held back very few suitors. In fact, many of the “extraneous” conditions were directly tied to significant industry needs. 

In Abundance, Klein and Thompson singled out the Biden administration’s funding preference for CHIPS Act applicants who “create equitable work force pathways for economically disadvantaged individuals,” including “building new pipelines for workers.” However, workforce development is a critical concern for semiconductor reshoring: The meager existing domestic chip industry in the U.S. means that the country has few skilled workers available to staff new fabs, posing a stark bottleneck for new investment. Indeed, in 2023, TSMC’s new fab in Phoenix was forced to delay production in part because of the shortage of skilled workers. 

Nonetheless, Klein and Thompson (along with many other critics) expressed particular dismay over the administration’s consideration of whether a CHIPS Act applicant would arrange for either onsite or local child care for its workers. Yet the industry itself seemed entirely unfazed by this requirement. In an interview with Oren Cass of American Compass, Scott Gatzemeier, an executive at the semiconductor manufacturer Micron Technology, explained, “What keeps me up at night [is] getting the workforce for these fabs and building the workforce for the future,” noting that his firm was actively exploring “nontraditional pathways [to] reach out and get more people coming in … At Micron, we’re building a daycare right across the street from our [Idaho] headquarters.” He added, “We’ve already purchased the land [at our coming] New York site to do that [too].” Semiconductor firms are providing perks like child care because it’s a smart strategic choice to attract and retain the skilled workers they need to succeed.

Klein and Thompson’s list of apparently dispensable funding preferences also included the administration’s encouragement of a “climate and environmental responsibility plan.” However, environmental impacts are hardly an abstract concern for chipmaking: Semiconductor manufacturing is a famously water-intensive process, with the typical fab consuming volumes of water each day equivalent to roughly 30,000 households’ usage. When Micron was deciding where to open its next production site, it ultimately chose Clay, New York, over Austin, Texas, in part because of central New York’s superior access to a reliable water supply. Indeed, as Micron explained in a 2024 financial filing, its “manufacturing and other operations in locations subject to natural occurrences and possible climate changes” could “result in increased costs, or disruptions to our manufacturing operations.” If anything, legitimate environmental concerns weren’t emphasized enough in the CHIPS application process, with TSMC opting to locate in notoriously water-constrained Arizona.

Many of the CHIPS Act conditions simply reflected the government doing due diligence on potential grantees, ensuring that they were properly prepared to plan for workforce development, environmental sustainability, and other foreseeable issues that could otherwise squander federal funding.

Abundance also made at least one notable omission from its list of Biden administration conditions for semiconductor firms. According to a 2023 New York Times report, Raimondo informed governors that the administration would favor applications from firms that had received state and local assurances “to have permitting sped through” normal review processes to expedite the construction of new fabs. That abundance-friendly funding criterion was reported by none other than Ezra Klein.

All of which makes the CHIPS Act an awkward poster child for everything-bagel liberalism run amok. Indeed, in fretting about condition overload, Klein and Thompson might well have their worries backward. To understand the value of industrial and economic policy with conditions, consider what such a policy without conditions looks like: the brief saga of Amazon in New York City.

In 2017, Amazon announced a nationwide competition to determine where it would build its second headquarters (“HQ2”). The competition prompted a stampede of nearly 240 cities applying to host the e-commerce giant, jockeying in a race to the bottom to offer the company billions in tax breaks and other public incentives. New Jersey and Maryland each reportedly offered a record $7 billion in incentives to the company. Within a year, Amazon announced that it would split the new headquarters into two locations: New York City and Arlington, Virginia. In New York, Amazon was set to receive $3 billion in state and city incentives for up to 8 million square feet of office space on the East River waterfront in Long Island City, Queens, replete with a private helipad for CEO Jeff Bezos.

Amazon’s new headquarters would have displaced a planned mixed-use neighborhood—including new apartments with affordable units—and a school lunch distribution center. For its part, the company pledged a handful of community benefits, including office space for tech start-ups, internships and résumé workshops, a new community school, and green space.

After negotiating the deal in secret, New York’s political leaders—Governor Andrew Cuomo, and Mayor Bill de Blasio—and Amazon appeared to expect a celebratory reception. But the surprise deal quickly provoked public outrage. Residents objected to the eye-popping tax breaks offered to one of the world’s richest companies, and the lack of transparency surrounding the deal—something Amazon had insisted on by making public negotiators sign nondisclosure agreements. Elected officials condemned the deal as diverting taxpayer funds to corporate welfare while pressing city issues like affordable housing and deteriorating public transit went unaddressed (and could be worsened by Amazon’s arrival). Representative Alexandria Ocasio-Cortez tweeted, “Amazon is a billion-dollar company. The idea that it will receive hundreds of millions of dollars in tax breaks at a time when our subway is crumbling and our communities need MORE investment, not less, is extremely concerning to residents here.” State Senator Michael Gianaris added, “If Amazon wants to come here, they should be talking about subsidizing Long Island City, not squeezing subsidies out of New York state or New York City.” Others saw behavior like Amazon’s anti-union history and its successful effort to kill a Seattle tax that would have funded affordable housing for homeless people as troubling red flags for the city’s future with the company.

By February 2019, the deal was dead. Amazon announced that it was pulling out of New York, and Virginia would be the sole location for HQ2. (Amazon was reportedly “upset at even [the] moderate level of resistance” it faced in New York.) The company made its decision shortly after the New York State Senate selected Gianaris, a vocal Amazon opponent, to sit on a state public authority board with the power to veto the deal absent unanimous approval. 

Some abundance-oriented commentators have pointed to this breaking point as an example of how vetocracy can stifle economic development. In Why Nothing Works: Who Killed Progress—And How to Bring It Back, Marc Dunkelman concludes, “Whether or not Amazon’s proposal was worthwhile, no single state senator should have the authority to scuttle a deal of this magnitude on his own … In a nation desperate to build, progressivism’s imbalance had left the government unable to deliver.”

That may be so. But New York lawmakers took a different lesson from the HQ2 debacle. In 2021, state legislators set out to craft new economic development legislation to draw semiconductor manufacturers to New York—a state-level set of incentives separate from those being developed under the federal CHIPS Act around the same time. And lawmakers had the Amazon fracas front of mind: State Assemblyman Al Stirpe, one of the legislative sponsors of the New York bill, wanted to ensure that any financial incentives for chipmakers came with attached requirements to benefit the community at large. "There is a concern about the Amazon blowup from a year and a half ago in New York City," Stirpe told reporters. "You want to make sure people who feel that way about incentives for corporations know that we're very concerned about inclusivity and this project will benefit everybody."

In August 2022, New York enacted the Green CHIPS Act, which created a $10 billion fund to provide subsidies to chipmakers that moved to the state. To be eligible for funding, firms had to create at least 500 new jobs, reduce their greenhouse gas emissions, provide workforce development opportunities for low-income New Yorkers, and pay prevailing wages for plant construction. “We thought if we were going to give a business an incentive, which is what they want, let’s get something that we want in terms of a sustainable project,” Stirpe said.

The Green CHIPS Act took the anti-Amazon approach. Successful economic development projects required socially beneficial terms and conditions in exchange for public subsidies to corporations. And where HQ2 liberalism failed, everything-bagel liberalism succeeded. In October 2022, on the heels of both the federal and state CHIPS laws, Micron announced its new $100 billion investment in central New York, a project that promises to transform the region.

The juxtaposition of these two approaches—no strings attached for Amazon on the one hand, and some strings attached for semiconductors on the other—is telling. When the government undertakes industrial policy, attaching smart conditions is an important, even essential, predicate for success for several reasons.

First, conditions can help secure industrial policy’s democratic legitimacy. New York’s Amazon deal met resistance because of the top-secret negotiations and because the HQ2 search process smacked of unseemly corporate power: a mega-company dictating development terms to the public, instead of the other way around. Conditions that alter firm behavior and ensure that a project advances important social purposes signal that the public is driving industrial policy, rather than letting the state acquiesce to corporate capture. 

Second, the federal government can set high national standards by employing conditional industrial policy. The design of the HQ2 competition played states and cities against one another to up the subsidy ante for Amazon's benefit. Because states are in competition with one another for any company's business, big corporations can drive a bidding war. That's less true at the national level. While the federal government has to worry about firms moving abroad, it's still much easier for companies to move across state lines than national ones. That gives the federal government more bargaining power to add conditions consistent with community and coalitional values on subsidies to corporations. While New York was able to put a small number of conditions on its CHIPS bill, the federal government was able to demand far more from firms benefiting from its largesse.

Third, conditions can protect the government's investment. Many of the CHIPS Act conditions simply reflected the government doing due diligence on potential grantees, ensuring that they were properly prepared to plan for workforce development, environmental sustainability, and other foreseeable issues that could otherwise squander federal funding. Conditions can also include corporate guardrails to make sure that federal subsidies are not siphoned off by shareholders, as the economists Lenore Palladino and Isabel Estevez have explained. Those guardrails can take the form of restrictions on share buybacks and shareholder payments, protections for workers, and public equity stakes. Protective conditions increase the government's confidence that the investment will achieve its aims, thereby decreasing the risk that the investment fails and becomes right-wing fodder as yet another wasteful liberal boondoggle.

Other investment protections can seek to maximize the benefit of public subsidies for the American economy. For example, the clean vehicle tax credit under the Inflation Reduction Act included domestic content conditions requiring eligible automakers to substantially source key EV components from the United States or its free trade partners. This leveraged an industrial policy subsidy to spur additional domestic economic activity—in this instance, EV battery production and critical mineral development. In doing so, it also advanced national security priorities by creating a firewall to limit federal funds from indirectly subsidizing foreign supply chains in China.

And fourth, conditions distinguish industrial policy from mere corporate welfare by securing a direct public return on investment. Echoing criticisms of HQ2, in 2022 Senator Bernie Sanders condemned the not-yet-passed CHIPS Act as “corporate welfare,” saying that “taxpayer handout[s]” would go to “five companies [that] made $70 billion in profits last year.” While he supported industrial policy, Sanders said, “industrial policy to me means cooperation between the government and the private sector. Cooperation. It does not mean the government providing massive amounts of corporate welfare to profitable corporations without getting anything in return.” He demanded that CHIPS Act subsidies be limited to only those companies that “agree to issue warrants or equity stakes to the Federal Government. If private companies are going to benefit from generous taxpayer subsidies, the financial gains made by these companies must be shared with the American people, not just wealthy shareholders.” While stopping short of taking an equity stake in semiconductor firms, the Biden administration did impose an “upside sharing” requirement such that companies receiving funding under the CHIPS Act must share any profits achieved above a certain threshold with the federal government. 

Industrial policy that benefits private companies for a public purpose ought to yield direct public returns. As the economists Mariana Mazzucato and Dani Rodrik put it, “conditions create a healthy tension between public and private so that subsidies are part of a ‘deal’ rather than a blanket handout.” For instance, when the Mexican government sought to curb grocery inflation, it granted industry concessions by waiving import tariffs, freezing transportation fees, and relieving regulatory burdens—but on the condition that firms agree to consumer-facing price caps on food and essentials. Likewise, when the United States sought to expand health coverage through the Affordable Care Act, it subsidized the insurance industry—but insisted that insurers cover a stronger set of essential health benefits, offer plans to everyone regardless of preexisting conditions, and spend a certain percentage of premium dollars on actual care, among other requirements. Indeed, industrial policy with conditions bears some resemblance to utility regulation: Private firms that benefit from government protection or investment must in turn comply with special rules and requirements for the benefit of the public.

Klein and Thompson have a point that liberals need to pay as much attention to supply as they do to demand. But their prescriptions often only get us part of the way to actual abundance. With housing, exclusionary zoning and NIMBYism do constrain supply (including for public housing) in many high-growth areas. But those obstacles don’t explain why home construction never fully recovered from the 2008 housing crash. With clean energy, old green laws are slowing new green projects. But so are financing challenges, input bottlenecks, fickle market-sensitive private developers, and oppositional investor-owned utilities. Or take semiconductors: Regulatory and permitting processes do slow the build-out of new domestic fabs. But as TSMC showed, so do workforce constraints.

Abundance can be hampered by more than just government red tape. And securing abundance will require a pragmatic blend of catalyzed private power (whether through subsidization or regulatory reform) and directive public power (to ensure that the private sector doesn’t subvert or bungle public aims, or siphon off public dollars). 

“Everything-bagel liberalism”—industrial policy with conditions—provides that blend. The government provides the direction, but the private sector generates the output.

Policy makers must find a sweet spot where a project pencils out financially for the firm while the attached conditions hold water democratically for the public. No-strings-attached public money to corporations is generally not democratically viable or desirable. Yet a tangle of attached strings risks strangling a project for a firm. 

The question is not whether to do conditional industrial policy, but how much to do. How many conditions should be attached? And which ones? Maybe there's a role for a new industrial policy agency that can get a stronger grasp of industry dynamics and so better understand which conditions are valuable and which might be detrimental. The most obvious conditions to include are those that simply adopt an industry's own best practices—like, in the semiconductor context, proactive workforce development and environmental sustainability. And there's also a strong democratic case for protective conditions that get something back for the public—like a share of profits, or a commitment to R&D investment—even though they might raise hackles among a firm's shareholders.

The aim, according to Mazzucato and Rodrik, is to “maximize the public value of public investments.” Positive-sum industrial policy enhances the collective good, alleviating scarcity and want. The road to abundance is lined with everything bagels.

The post In Defense of Everything-Bagel Liberalism appeared first on Washington Monthly.
