The Federal Reserve System was created in 1913 and soon did what
central banks almost always do: it started printing lots of money.
During World War I the Bank of England inflated its money supply, and
as a result a significant amount of gold flowed out of Great Britain
to the United States in the 1920s. Instead of controlling their
monetary expansion, the British authorities put pressure on the U.S.
government to expand its own money supply. The Fed happily complied
and engaged in significant monetary inflation throughout most of the
decade. This extra liquidity helped fuel the stock-market boom, but by
1928 it was obvious that monetary expansion had gone too far, and the
Fed changed course. What ensued was a massive contraction of the money
supply, followed by many years of incoherent and incompetent monetary
policy that strongly contributed to the length and severity of the
Great Depression.
When the Federal Reserve began restraining the money supply, stock-market gains slowed, and a pullback soon developed. In the 1920s it was common for both banks and individuals to borrow in order to invest in the stock market. Today Federal Reserve margin requirements limit debt for stock purchases to 50 percent, but during the 1920s leverage of up to 90 percent debt was not uncommon. When the
stock market started to contract, many individuals received margin
calls. They had to deliver more money to their brokers or their shares
would be sold. Since many individuals did not have the equity to cover
their margin positions, their shares were sold, causing further market
declines and further margin calls. The stock market crash of 1929 was
the result.
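The arithmetic behind those cascading margin calls can be sketched with hypothetical numbers (the prices and loan sizes below are illustrative, not historical data):

```python
# Illustrative sketch of 1920s-style margin leverage (numbers are hypothetical).
# An investor buys $10,000 of stock with $1,000 of their own money
# and $9,000 borrowed from a broker (90 percent debt).

purchase_price = 10_000.0
loan = 9_000.0
equity = purchase_price - loan          # $1,000 of investor capital

# A mere 10 percent decline in the stock erases the investor's entire equity:
market_value = purchase_price * 0.90    # stock falls 10% -> $9,000
equity_after = market_value - loan      # $9,000 - $9,000 = $0
print(f"equity after 10% drop: ${equity_after:,.0f}")   # $0 -- margin call

# Under a 50 percent initial margin, the same drop leaves a cushion:
loan_50 = 5_000.0
equity_50 = purchase_price * 0.90 - loan_50
print(f"equity at 50% margin:  ${equity_50:,.0f}")      # $4,000
```

At 90 percent debt, any decline beyond 10 percent leaves the investor owing more than the shares are worth, which is why forced selling fed on itself.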
In the 1920s banks were allowed to invest their assets in the stock
market, so many banks went bankrupt as well. The Glass-Steagall Act of
1933, one of the many new Depression-era banking regulations, made
equity positions for banks illegal. This act split banks into two
types: commercial banks and investment banks. (This law was repealed
in 1999.)
The events of 1929, largely caused by government intervention in the
economy, were used as an excuse for much more government intervention
in the economy. Because of the extreme overprinting of money during
the 1920s, the Fed had nowhere near the amount of gold necessary to
cover all the claims it had printed. The American people were wise
enough not to trust the banking system and attempted to get their gold
out of America’s banks. The Roosevelt administration responded by
closing down the banks (declaring a banking holiday) and defaulting on
the gold standard. In 1933 President Roosevelt signed Executive Order
6102 making private ownership of gold for investment purposes in the
United States illegal. Roosevelt also seized most of the American
citizens’ gold coins, melted them down, and put them in Fort Knox.
The Bretton Woods agreement of 1944 again put the world temporarily on something resembling a gold standard. All the world’s
major currencies were fixed to the dollar, and the dollar was fixed to
gold. An ounce of gold was supposed to be worth $35 forever after. The
U.S. central bank then proceeded again to print too much money.
Throughout the 1960s various European central banks, particularly the
French, redeemed their dollars for gold. The United States had too
little gold to meet the demand, so in August 1971 President Richard
Nixon suspended convertibility of the dollar and defaulted on the
Bretton Woods agreement. The gold standard was abandoned, and
currencies soon floated against each other.
The leaders of the Federal Reserve System did not learn from this
experience. They still kept overprinting money, and the Consumer Price
Index (CPI) continued to climb. (An increase in the CPI is often,
misleadingly, called the inflation rate. Other things equal, inflating
the money supply sets off a general rise in prices.) In 1960 the CPI
increased less than 1 percent, but by 1970 it was up to over 5
percent. Predictably, political leaders addressed the effects, not the cause. In 1971 Nixon tried to outlaw the symptoms of inflation by
imposing wage and price controls. In 1974 his successor, Gerald Ford,
launched the WIN (Whip Inflation Now) campaign, but the “printing
presses” kept running and inflation kept increasing. In 1980, under
President Jimmy Carter, the “inflation rate” peaked at over 12
percent. While this was high by U.S. standards, other countries fared
far worse. In 1985 the annual increase in Bolivia reached 11,749
percent, in Argentina 672 percent, and in Brazil 227 percent.
As inflation raged, purchasing power evaporated. Nominal wage
increases and profits were illusory, wreaking havoc with people’s
ability to do economic calculations and pushing them into higher
income-tax brackets. When market participants came to expect
inflation, the economy slowed down and unemployment increased, even as
prices continued rising. This was the supposedly impossible
“stagflation” of the late 1970s.
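The bracket-creep effect mentioned above can be put in a small numeric sketch. The tax brackets below are invented for illustration, not actual 1970s rates:

```python
# Hypothetical sketch of "bracket creep": a 12% raise that merely matches
# 12% inflation can still push income into a higher, unindexed tax bracket.

def tax(income):
    # Invented brackets: 20% on the first $20,000, 40% on anything above.
    return 0.20 * min(income, 20_000) + 0.40 * max(income - 20_000, 0)

income_before = 20_000.0
income_after = income_before * 1.12      # nominal raise exactly matching inflation

after_tax_before = income_before - tax(income_before)   # $16,000
after_tax_after = income_after - tax(income_after)      # $22,400 - $4,960 = $17,440

# Deflate the later figure back to the earlier price level:
real_after = after_tax_after / 1.12
print(f"real after-tax income: {after_tax_before:,.0f} -> {real_after:,.0f}")
```

Even though the raise kept pace with prices, real after-tax income falls, because part of the nominal gain is taxed at the higher marginal rate.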
Excessive monetary expansion resulted in another financial disaster in
the latter part of the twentieth century—the savings-and-loan fiasco.
This was the largest banking failure in U.S. history. Over 1,000 S&Ls
failed, resulting in the bankruptcy of the Federal Savings and Loan Insurance Corporation (FSLIC) and a loss of over $150 billion, of
which over $120 billion was covered by the U.S. taxpayers. Many
culprits have been blamed for this banking disaster, from junk bonds
to greedy and corrupt banking officials, but the primary reason for
the widespread failures in the industry was irresponsible monetary
policy at the Fed.
By law S&Ls could only invest in 30-year fixed-rate home mortgages.
Their main source of funds was savings accounts from which depositors could withdraw their money at any time. This led to what is
known as a duration mismatch. During the low-inflation periods of the
1950s and 1960s, the S&Ls made many 30-year fixed-rate loans that were
still outstanding in the 1970s and 1980s. But when prices were rising
12 percent a year, while an S&L’s mortgages yielded only 6 percent and
its savings accounts paid only 5.5 percent, something had to give.
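The squeeze described above can be put in back-of-the-envelope form, using the rates already cited (6 percent mortgages, 5.5 percent deposits, 12 percent annual price increases):

```python
# Sketch of the S&L squeeze: a thrift earns 6% on old 30-year fixed-rate
# mortgages and pays 5.5% on deposits, while prices rise 12% a year.

mortgage_yield = 0.060    # return on outstanding fixed-rate loans
deposit_rate = 0.055      # rate paid to savers (capped by Regulation Q)
inflation = 0.12          # annual rise in the price level

# The thrift's nominal spread is a razor-thin half a percentage point:
spread = mortgage_yield - deposit_rate
print(f"nominal spread: {spread:.3%}")             # 0.500%

# The saver's real return is deeply negative, so deposits flee:
real_return = (1 + deposit_rate) / (1 + inflation) - 1
print(f"saver's real return: {real_return:.1%}")   # about -5.8%
```

A depositor losing almost 6 percent of purchasing power a year had every incentive to move money elsewhere, which is exactly what happened.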
The reason S&L accounts paid only 5.5 percent was the government’s
Regulation Q, which imposed interest ceilings on savers. Meanwhile
U.S. Treasury bills were yielding over 8 percent in 1970 and over 15
percent around 1980. But to buy a T-bill an investor needed $10,000.
Thus rich people could shift their money from savings accounts to T-
bills, but those without $10,000 were stuck. Fortunately, the money-
market mutual fund emerged in 1971. This type of account allowed
people to pool their money with other small investors and together
purchase T-bills and commercial paper. Now small investors could also
purchase high-yield short-term investments.
With small depositors moving their money from savings accounts to
money-market funds, a process called disintermediation, the S&Ls were
in trouble. They responded by offering toasters and other giveaways to
attract customers, but that did not work, and massive bankruptcies
ensued. In 1982 the Garn-St. Germain Act allowed banks and thrifts to offer money-market deposit accounts without an interest-rate ceiling, letting them compete with the money-market mutual funds.
Volcker to the Fed
In 1979 Jimmy Carter appointed Paul Volcker to head the Federal
Reserve. Volcker did what he said he was going to do. He drastically
increased short-term interest rates to control inflation. In 1980
short-term interest rates were much higher than long-term interest
rates. Since S&Ls made their money by borrowing short and lending long, this contributed further to their destruction. By 1982 the “inflation
rate” had dropped to around 4.25 percent, but many S&Ls were bankrupt,
with total losses of over $100 billion. In a desperate effort to fix the problem, federal regulators allowed the S&Ls to engage in questionable accounting practices and riskier (and, it was hoped, more profitable) investments. It did not work, and by the end of the 1980s
S&L losses had risen to over $150 billion. Since the liabilities
(depositors’ money) were federally insured, the taxpayers had to pick
up much of the tab. Finally, in the late 1980s and early 1990s the
insolvent S&Ls were shut down. With price increases back down to moderate rates, stability gradually returned to the financial markets. In 1982 one of
the greatest bull markets in U.S. history began and continued until
2000. While inflation measured by the consumer price index has been
relatively mild in the United States since the early 1980s, the money
supply has continued to expand at a rapid rate. It is likely that this
expansion contributed to the stock-market bubble of the late 1990s and
the housing boom of the early 2000s.
Not everyone benefited from the restabilization of the financial
markets. In societies where the currency is unstable people naturally
turn to other stores of wealth. Gold and real estate are common
investments, and the United States during the 1970s was no exception.
While the stock market floundered during the 1970s, the real-estate
market boomed. The increase in real-estate values included not only
houses but also farmland. When the price of farmland increased,
farmers in the Midwest were able to take out larger bank loans and
drastically expand their operations. But when real-estate prices
declined, the leverage ratios on farms drastically increased. During
the 1980s many farm bankruptcies followed.
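The leverage mechanism at work on those farms can be sketched with hypothetical figures (the land values and loan amounts below are illustrative):

```python
# Illustrative sketch of how falling land prices raise a farm's leverage.
# Numbers are hypothetical; the mechanism is what the text describes.

land_value = 1_000_000.0   # farmland at its 1970s boom valuation
debt = 600_000.0           # loan taken out against that valuation

leverage_before = debt / land_value
print(f"debt-to-assets before: {leverage_before:.0%}")   # 60%

# Land prices fall 40%; the debt does not shrink with them:
land_value_after = land_value * 0.60
leverage_after = debt / land_value_after
print(f"debt-to-assets after:  {leverage_after:.0%}")    # 100%
```

Because the debt is fixed in nominal dollars while the collateral is not, a 40 percent fall in land prices here leaves the farm with no equity at all.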
The Great Depression of the 1930s, the S&L fiasco of the 1980s and
1990s, and the farming crisis of the 1980s are all testaments to the
extremely destructive effects that an irresponsible monetary policy
can have on a society. Policymakers allegedly created the Fed to
prevent these sorts of problems. Official goals of the system include
maximizing employment, stabilizing prices, and maintaining moderate
long-term interest rates. (See
www.federalreserve.gov/pf/pdf/pf_2.pdf.)
It is interesting to note, however, that the only extended periods
when the U.S. economy did not experience significant inflation
occurred when the United States had no central banking system.