
Obama Finds His Spine and Attacks Fat Cat Bankers

By L. Randall Wray

President Obama has finally castigated the fat cat bankers who caused the crisis that forced millions of Americans from their jobs and their homes. Let us hope this represents a much-awaited reversal of his heretofore prostration before Rubin’s Wall Street buddies. (See Matt Taibbi’s “Obama’s Big Sellout”.)

Trends always begin with a first step. Now he must prove that he really means it by firing Timmy (why haven’t you resigned yet?) Geithner and Larry Summers for crimes against the economy. We might even hope for a more deserved outcome:

The next step will be to send the FDIC into the largest, “systemically dangerous” financial institutions to shut them down. Fire all the top management plus any traders who have earned six-figure bonuses in any of the past three years. Replace them with lowly paid middle management. We will need a new Jesse Jones (President Roosevelt’s selection to head the Reconstruction Finance Corporation, which took over half of the nation’s banks in the Great Depression, successfully resolving most of them); my colleague, Bill Black (who helped to resolve the thrift crisis, standing up to Charles Keating, John McCain and the rest of the Keating Five), is the leading candidate. His task will be to downsize the financial sector, starting with the top two dozen or so banks—which will be broken into small pieces. Their former management and traders will be investigated for fraud, which will be found with a probability approaching certainty in all cases.

We will also need to round up all the Goldman Sachs alumni now working in government—shoveling favors to their former employer—for special treatment. Part of the President’s remaining stimulus funds can be devoted to building new prisons for them and the thousands of other financial market predators. The new penitentiaries ought to be sited in places like Modesto, California, that are suffering the most from the real estate collapse, making use of foreclosed properties and providing jobs as prison guards to those who have been displaced. At their hands, poetic as well as Biblical justice could be visited on Wall Street’s finest.

The notion that these “highly skilled” whiz kids need huge bonuses to retain them is, as Keynes remarked, “crazily improbable—the sort of thing which no man could believe who had not had his head fuddled with nonsense for years and years.” Your average Brooklyn plumber has more financial market sense than all of these clowns combined. Their efforts were directed to another goal—running a kleptocracy that would make a Russian mobster blush. There is no evidence that they have learned anything from this debacle and unless they are removed, they will dig the financial black hole even deeper.

This is the real change we have been waiting for since Obama took office. The Obama elected by the American people has been AWOL since last November, with an evil twin occupying the White House and repaying Wall Street for its campaign contributions. However, America is still a democracy—one citizen, one vote—and Obama has got to realize that while the fat cats might have contributed most of the money, they did not provide many votes. It is time to boot the imposter and downsize Wall Street’s influence on Washington.

The electorate voted for change. I have the audacity of hope to believe that the real Obama is the one we saw this week. Send the fake back to Goldman’s.

Memo to Congress: Don’t Increase the Government’s Debt Limit!

By L. Randall Wray

In a piece written for CNN, Senator Evan Bayh rails against the growing federal government budget deficit. He warns that next month the Treasury will ask Congress to raise the debt limit from its current $12.1 trillion, and promises that he will vote “no”. It is time, he argues, for Congress to stand up for our nation’s future by creating a bi-partisan debt commission that would finally put an end to “unsustainable” deficit spending.

The Senator goes on:

When President George W. Bush took office in 2001, our public debt amounted to 33 percent of our economy. Today, it is 60 percent of our gross domestic product. If we do nothing, our debt is projected to swell to over 70 percent by 2019. To put those numbers in perspective: If you divided the debt equally among all Americans, every man, woman and child living in the United States today would owe more than $39,000.

I presume the Senator has got his math correct, but there is a glaring error in his English that can be corrected by substituting an “n” for an “e”: If you divided the debt equally among all Americans, every man, woman and child living in the United States today would own more than $39,000. Government debt is a private asset. You and I do not owe government debt, we own it. Indeed, the only source of net dollar-denominated financial wealth is federal government debt.
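
The arithmetic is easy to check. Here is a minimal sketch; the population figure is an approximation, since the Senator does not say which base he used:

```python
# Rough check of the Senator's per-capita figure; the population is an approximate 2009 value.
debt_limit = 12.1e12   # current debt ceiling, in dollars
population = 307e6     # approximate US population in 2009

per_person = debt_limit / population
print(f"Debt (or wealth) per person: ${per_person:,.0f}")  # roughly $39,000
```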

The good Senator continues, comparing his proposed debt commission with an earlier successful bi-partisan effort:

There is precedent to create this type of commission with real teeth. President Ronald Reagan created a commission, chaired by Alan Greenspan, to shore up Social Security in the early 1980s.

That commission hiked payroll taxes to transform Social Security from a “pay-as-you-go” system (payroll taxes collected were matched to current year spending) to an “advanced funded” system that accumulated “Trust Fund assets”. In truth the Trust Fund is nothing but an accounting gimmick in which one arm of government (the Treasury) owes another arm of government (Social Security), with workers and their firms saddled with payroll taxes that are a third larger than Social Security spending. Like almost everything else Alan Greenspan did, the Social Security commission was a monumental failure and its actions were completely unnecessary. All Social Security payments can be made as they come due whether the Trust Fund holds Treasury debt or not, and no matter how much “revenue” the payroll tax collects. Like the bowling alley that credits points when pins are knocked down, the Treasury cannot run out of “points” credited to the accounts of pensioners.

The anti-deficit mania in Washington is getting crazier by the day. So here is what I propose: let’s support Senator Bayh’s proposal to “just say no” to raising the debt ceiling. Once the federal debt reaches $12.1 trillion, the Treasury would be prohibited from selling any more bonds. Treasury would continue to spend by crediting bank accounts of recipients, and reserve accounts of their banks. Banks would offer excess reserves in overnight markets, but would find no takers—hence would have to be content holding reserves and earning whatever rate the Fed wants to pay. But as Chairman Bernanke told Congress, this is no problem because the Fed spends simply by crediting bank accounts.

This would allow Senator Bayh and other deficit warriors to stop worrying about Treasury debt and move on to something important like the loss of millions of jobs.

When All Else Has Failed, Why Not Try Job Creation?

By L. Randall Wray

The US continues to hemorrhage jobs even as some purport to see “green shoots”. All plausible projections show that unemployment will rise even if our economy begins to grow. Personally, I think those green shoots will die this winter because the stimulus package is far too small and because the financial system is going to crash again. The longer we wait to actually address the unemployment problem, the worse are the prospects for a real recovery.

In his recent piece, Paul Krugman writes:

Just to be clear, I believe that a large enough conventional stimulus would do the trick. But since that doesn’t seem to be in the cards, we need to talk about cheaper alternatives that address the job problem directly. Should we introduce an employment tax credit, like the one proposed by the Economic Policy Institute? Should we introduce the German-style job-sharing subsidy proposed by the Center for Economic Policy Research? Both are worthy of consideration.

The point is that we need to start doing something more than, and different from, what we’re already doing. And the experience of other countries suggests that it’s time for a policy that explicitly and directly targets job creation.


As Krugman reports, Germany has avoided massive job losses by subsidizing firms that retain workers but reduce hours worked. The EPI’s proposal follows a similar strategy. This is fine so far as it goes—in a sense it allows workers, firms, and government to share the burden of reduced output and thus reduced work hours required. That is more equitable but in my view it is not a path toward recovery. While I do agree with Krugman that greater aggregate demand stimulus is required, there is no reason to believe that would provide a sufficient supply of jobs for all who want to work.

The final sentence in the Krugman post makes far more sense: let’s create MORE jobs, MORE work hours, and MORE payroll. What we need is a new New Deal-style program with a permanent and universal job guarantee that will supply as many jobs as there are job seekers. Not only will such a program provide jobs directly, it will also save jobs and increase work hours in the rest of the economy. Why go for second or third best when the best option is available?

Winston Churchill remarked, “The Americans will always do the right thing… after they’ve exhausted all the alternatives.” Direct job creation is the right way to put the economy onto a sustainable path to recovery.

For discussion and ideas on direct job creation and full employment, go here; here; here; and here.

The Time Has Come for Direct Job Creation

First Published on the New America Foundation’s blog.
According to an ILO report[15] issued before the global economic crisis hit, even though more people were working than ever before, the number of unemployed was also at an all-time high of nearly 200 million. Further, “strong economic growth of the last half decade has only had a slight impact on the reduction of workers who live with their families in poverty…”, in part because growth was fueling productivity (up 26% in the past decade) but was not creating many new jobs (employment up only 16.6%). The report concluded that “Every region has to face major labour market challenges” and that “young people have more difficulties in labour markets than adults; women do not get the same opportunities as men, the lack of decent work is still high; and the potential a population has to offer is not always used because of a lack of human capital development or a mismatch between the supply and the demand side in labour markets.” All of these statements applied equally well to the United States even at the peak of our business cycle in early 2008.

Now, of course, our labor market is in dire straits–having lost more than 6 million jobs, with official unemployment approaching 10%, and with millions more workers facing reduced hours and even reduced hourly pay. According to a New America Foundation report[16] released late last spring, if we add “marginally attached” workers, those forced to work part-time, and those who would like to work but have given up looking, the effective unemployment total is over 30 million. Add to that another 2 million incarcerated individuals–many of whom might have avoided a life of crime if they had enjoyed better economic opportunities–and it is likely that a more accurate measure of the unemployment rate would be about 20%.
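
A rough check of that broader rate, assuming a civilian labor force of about 154 million in 2009 (the report’s exact base is not given here):

```python
# Back-of-the-envelope effective unemployment rate; the labor force size is an assumption.
broadly_jobless = 30e6   # unemployed, marginally attached, involuntary part-time, discouraged
incarcerated = 2e6
labor_force = 154e6      # approximate 2009 civilian labor force

rate = (broadly_jobless + incarcerated) / labor_force
print(f"Effective unemployment rate: {rate:.1%}")  # about 21%, in line with the 20% figure
```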

These numbers are similar to those I obtained for the Clinton boom when I estimated how many potential workers remained jobless even when the economy was supposedly at full employment.[17] Labor force participation rates–the percent of working age population that is employed or unemployed–vary considerably by educational level; high school dropouts have very low participation rates, and correspondingly high incarceration rates. I calculated that as many as 26 million more people would be working if we brought labor force participation rates of all adults up to the levels enjoyed by college graduates. That number would be higher now because of lackluster job creation during the Bush years and due to the economic crisis. Thus, we can safely conclude that whether the US economy is booming or busting, it is chronically tens of millions of jobs short.

Comparing such numbers with President Obama’s promise that his policies will create, or at least preserve, three or four million jobs demonstrates that current policy is not up to the task of dealing with our labor market problems. To be sure, there is no single labor market policy that can deal with the scope of our problems. We certainly need to resolve the financial crisis and to restore economic growth. But as experience demonstrates, even relatively robust growth does not automatically create jobs.

We also have severe structural problems: some sectors, such as manufacturing, will create far too few jobs relative to the supply of workers with appropriate skills, while others, such as the FIRE sector–finance, insurance and real estate–likely should be downsized, and still others, such as nursing and trained childcare, face a chronic shortage. Finally, it could be argued that we face another kind of structural problem identified a half century ago by John Kenneth Galbraith: a relatively impoverished public sector and a bloated for-profit sector. Thus, while recognizing the multi-faceted nature of our problem, I believe that direct job creation by government would go a long way toward resolving a large part of–and probably the worst of–our unemployment problem even as it could put people to work to provide needed public sector services.

Direct job creation programs have been common in the US and around the world. Americans immediately think of the various New Deal programs such as the Works Progress Administration (which employed about 8 million), the Civilian Conservation Corps (2.75 million employed), and the National Youth Administration (over 2 million part-time jobs for students). Indeed, there have been calls for revival of jobs programs like VISTA and CETA to help provide employment of new high school and college graduates now facing unemployment due to the crisis.[18]

But what I am advocating is something both broader and permanent: a universal jobs program available through the thick and thin of the business cycle. The federal government would ensure a job offer to anyone ready and willing to work, at the established program compensation level, consisting of a wage plus a benefits package. To make matters simple, the program wage could be set at the current minimum wage level, and then adjusted periodically as the minimum wage is raised. The usual benefits would be provided, including vacation and sick leave, and contributions to Social Security.

Note that the program compensation package would set the minimum standard that other (private and public) employers would have to meet. In this way, public policy would effectively establish the basic wage and benefits permitted in our nation–with benefits enhanced as our capacity to provide them increases. I do not imagine that determining the level of compensation will be easy; however, a public debate that brings into the open matters concerning the minimum living standard our nation should provide to its workers is not only necessary but also would be healthy.

The federal government would not have to micromanage such a program. It would provide the funding for direct job creation, but most of the jobs could be created by state and local government and by not-for-profit organizations. There are several reasons for this, but the most important is that local communities have a better understanding of needs. The New Deal was more centralized, but many of the projects were designed to bring development to rural America: electrification, irrigation, and large construction projects. To be sure, we need infrastructure spending today, but much of that can be undertaken by state and local governments. This program would provide at least some of the labor for these projects, with wages and some materials costs paid by the federal government.

More importantly, today we face a severe shortage of public services that could be substantially relieved through employment at all levels of government plus not-for-profit community service providers. Examples include elder care and childcare, playground supervision, non-hazardous environmental clean-up and caring for public space, and low-tech improvement of energy efficiency of low-income residences. Decentralization promotes targeting of projects to meet community needs–both in terms of the kinds of programs created and in terms of matching new jobs to the skills of unemployed people in those communities. Also note that by creating millions of decentralized public service jobs, we avoid one of the major criticisms of the stimulus package: because there were not enough “on the shelf” infrastructure-type projects, it is taking a long time to create jobs. Instead, we should allow every community service organization to add paid jobs so that they can quickly expand current operations.

As the economy begins to recover, the private sector (as well as the public sector) will begin to hire again; this will draw workers out of the program. That is a good thing; indeed, one of the major purposes of this program is to keep people working so that a pool of employable labor will be available when a downturn comes to an end. Further, the program should do what it can to upgrade the skills and training of participants, and it will provide a work history for each participant to use to obtain better and higher-paying work. Experience and on-the-job training are especially important for those who tend to be left behind no matter how well the economy is doing. The program can provide an alternative path to employment for those who do not go to college and cannot get into private sector apprenticeship programs.

There are some recent real-world examples of programs that are similar to the one I am proposing. When Argentina faced a severe financial, economic, and social crisis early this decade, it created the “Jefes” program in which the federal government provided funding for labor and a portion of materials costs for highly decentralized projects, most of which created community service jobs.[19] The program was targeted to poor families with children, allowing each to choose one “head of household” to participate in paid work. The program was up and running within four months, creating jobs for 14% of the labor force–a remarkable achievement. More recently, India has enacted the National Rural Employment Guarantee, which ensures 100 days of paid work to rural adults. While the program is limited, it does make an advance over the Jefes program: access to a job becomes a recognized human right, with the government held responsible for ensuring that right.

Indeed, the United Nations Universal Declaration of Human Rights includes the right to work, not only because it is important in its own right, but also because many of the other economic and social entitlements proclaimed to be human rights cannot be secured without paying jobs. And both history and theory strongly indicate that the only way to secure a right to work is through direct job creation by government. This is not, and should not be, a responsibility of the private sector, which employs workers only on the expectation of selling output at a profit. Even if we could somehow manage economic policy to produce a permanent state of boom, we know that will still leave tens of millions of potential workers unemployed or in part-time and underpaid work. Hence, a direct government job creation program is a necessary component of any strategy of ensuring achievement of many of the internationally recognized human rights.

[15] Global Employment Trends Brief 2007, International Labour Office; results summarized in “Global Unemployment Remains at Historic High Despite Strong Economic Growth”, ILO 25 January 2007, Geneva. See also The Employer of Last Resort Programme: Could it work for developing countries?, L. Randall Wray, Economic and Labour Market Papers, International Labour Office, Geneva, August 2007, No. 2007/5.

[16] Not Out of the Woods: A Report on the Jobless Recovery Underway, New American Contract, New America Foundation, 2009, www.newamericancontract.net.

[17] Can a Rising Tide Raise All Boats? Evidence from the Kennedy-Johnson and Clinton-era expansions, L. Randall Wray, in Jonathan M. Harris and Neva R. Goodwin (editors), New Thinking in Macroeconomics: Social, Institutional and Environmental Perspectives, Northampton, Mass: Edward Elgar, pp. 150-181.

[18] See Not Out of the Woods, referenced above.

[19] See Gender and the job guarantee: The impact of Argentina’s Jefes program on female heads of households, Pavlina Tcherneva and L. Randall Wray, CFEPS Working Paper No. 50, 2005.

Healthcare Diversions Part 3: The Financialization of Health and Everything Else in the Universe

By L. Randall Wray

In the previous two blogs I have argued that extending healthcare insurance is neither desirable nor will it reduce healthcare costs. Indeed, healthcare insurance is a particularly bad way to fund the provision of healthcare services. In this blog I argue that extension of healthcare insurance represents yet another unwelcome intrusion of finance into every part of our economy and our lives. In other words, the “reforms” envisioned would simply complete the financialization of healthcare that is already sucking money and resources into the same black hole that swallowed residential real estate. It is no coincidence that Senator Baucus, the Chair of the Senate Finance Committee, has been chosen to head the push on healthcare—not, say, someone who actually knows something about healthcare. The choice was obvious, similar to the choice of Goldman Sachs’ flunky, Timmy Geithner, to head the Treasury. (In truth, many of President Obama’s appointees have no more expertise in their assigned missions than did President Bush’s “heckuvajob” Brown chosen to oversee the response to Hurricane Katrina. The difference here is not really incompetence but rather inappropriate competence—as in foxes and henhouses.) When it comes to Washington, “Wall Street R Us”.

I have previously written about the financialization of houses and commodities (go to www.levy.org) and the plan to financialize death (earlier on this blog). In all of these cases, Wall Street packages assets (home mortgages, commodities futures, and life insurance policies) so that gamblers can speculate on outcomes. If you lose your home through a mortgage delinquency, if food prices rise high enough to cause starvation, or if you die an untimely death, Wall Streeters make out like bandits. Health insurance works out a bit differently: they sell you insurance and then the insurer denies your claim due to pre-existing conditions or simply because denial is more profitable and you probably don’t have sufficient funding to fight your way through the courts. You then go bankrupt (according to Steffie Woolhandler, two-thirds of US bankruptcies are due to healthcare bills) and Wall Street takes your assets and garnishes your wages.

Here’s the opportunity, Wall Street’s newest and bestest gamble: there is a huge untapped market of some 50 million people who are not paying insurance premiums—and the number grows every year because employers drop coverage and people can’t afford premiums. Solution? Health insurance “reform” that requires everyone to turn over their pay to Wall Street. Can’t afford the premiums? That is OK—Uncle Sam will kick in a few hundred billion to help out the insurers. Of course, do not expect more health care or better health outcomes because that has nothing to do with “reform”. “Heckuvajob” Baucus is more concerned about Wall Street’s insurers, who see a missed opportunity. They’ll collect the extra premiums and deny the claims. This is just another bailout of the financial system, because the tens of trillions of dollars already committed are not nearly enough.

You might wonder about the connection between insurance and Wall Street finance. They are two peas in a pod. Indeed, we threw out the Glass-Steagall Act that separated commercial banking from investment banking and insurance with the Gramm-Leach-Bliley Act of 1999 (note how easily that rolls off the tongue—sort of like a mixture of wool and superglue) that let Wall Street form Bank Holding Companies that integrate the full range of “financial services” such as loans and deposits, that sell toxic waste mortgage securities to your pension funds, that create commodity futures indexes for university endowments to drive up the price of your petrol, and that take bets on the deaths of firms, countries, and your loved ones.

Student loans, credit card debt, and auto leases? Financialized—packaged and sold to gamblers making bets on default. Even the weather can be financialized. You think I jest? The World Food Programme proposed to issue “catastrophe bonds” linked to low rainfall. The WFP would pay principal and interest when rainfall was sufficient; if there was no rainfall, the WFP would cease making payments on the bonds and would instead fund relief efforts. (Satyajit Das, Traders, Guns & Money, p. 32.) So can earthquakes—Tokyo Disneyland issued bonds that did not have to be repaid in the event of an earthquake. (ibid) It is rumored that Wall Street will even take bets on the assassination of world leaders (perhaps explaining the presence of armed protestors at President Obama’s speeches). Why not? Someone even set up a charitable trust called the “Sisters of Perpetual Ecstasy” as a special purpose vehicle to move risky assets off the books of its mother superior bank, to escape what passed for regulation in recent years. (Das, again) I once facetiously recommended the creation of a market in Martian oceanfront condo futures to satisfy the cravings of Wall Street for new frontiers in risk. Obviously, I set my sights too low. The next bubble will probably be in carbon trading—financialization of pollution!—this time truly toxic waste will be packaged and sold off to global savers. According to Das (p. 320), traders talk about new frontiers “trading in rights to clean air, water and access to fishing grounds; basics of human life that I had always taken for granted”.

Is there an alternative? Frankly, I do not know. Leaving aside the political problems—once Wall Street has got its greedy hands on some aspect of our lives it is very difficult to wrest control from its grasp—health care is a very complex issue. It is clear (to me) that provision of routine care should not be left to insurance companies. Marshall Auerback believes that unforeseen and major expenses due to accidents might be insurable costs. I am sympathetic. Perhaps “single payer” (that is, the federal government) should provide basic coverage for all of life’s normal healthcare needs, with individuals purchasing additional coverage for accidents. Basic coverage can be de-insured—births, routine exams and screening, inoculations, hospice and elder care. On the other hand, a significant portion of healthcare expenses is due to chronic problems, some of which can be traced to birth. I have already argued that these are not really insurable—they are the existing conditions that insurers must exclude. Others can be traced to lifestyle “choices”. Some employers are already charging higher premiums to employees whose body mass index exceeds a chosen limit—with rebates provided to those who manage to lose weight. While I am skeptical that a monetary incentive will be effective in changing behavior that is certainly quite complex, this approach is probably better than excluding individuals from insurance simply because of their BMI.

Some have called for extending a Medicare-like program to all. Although sometimes called insurance, Medicare is not really an insurance program. Rather it pays for qualifying health care of qualified individuals. It is essentially a universal payer, paygo system. Its revenues come from taxes and “premiums” paid by covered individuals for a portion of the program. I will not go into the details, but “paygo” means it is not really advance funded. While many believe that its Trust Fund could be strengthened through higher taxes now so that more benefits could be paid later as America ages, actually, Medicare spending today is covered by today’s government spending—and tomorrow’s Medicare spending will be covered by tomorrow’s government spending. At the national level, it is not possible to transport today’s tax revenue to tomorrow to “pay for” future Medicare spending.

I realize this is a difficult concept. In real terms, however, it is simpler to understand: Medicare is paygo because the health care services are provided today, to today’s seniors; there is no way to stockpile medical services for future use (ok, yes, some medical machinery and hospitals can be built now to be used later). And the true purpose of taxes and premiums paid today is to reduce net personal income so that resources can be diverted to the health care sector. Many believe we already have too many resources directed to that sector. Hence, the solution cannot be to raise taxes or premiums today in order to build a bigger trust fund to reduce burdens tomorrow. If we find that 25 years from now we need more resources in the health care sector, the best way to do that will be to spend more on health care at that time, and to tax incomes at that time to reduce consumption in other areas so that resources can be shifted to health care at that time.

Our problem today is that we need to allocate more health care services to the currently underserved, which comprises two different sets of people: folks with no health insurance, and those with health insurance that is too limited in coverage to provide the care they need. A general proposed solution is to provide a subsidy to get private insurers to expand coverage. (According to Taibbi, the current House Bill subsidies are projected to reach $773 billion by 2019.) If we take my example pursued in an earlier blog of a person with diabetes who is excluded because of the existing condition, the marginal subsidy required would have to equal the expected cost of care, plus a risk premium in case that estimate turns out to be too low, plus the costs of running the insurance business, plus normal profits. If, on the other hand, diabetes care were directly covered by a federal government payment to health care providers, the risk premium, insurance business costs, and profits on the insurance business would not be necessary. In other words, using the insurance system to pay for added costs of providing care to people with diabetes adds several layers of costs. That just makes no sense.
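
To make the layering concrete, here is a sketch with purely hypothetical numbers; none of the figures below come from the post or from the House bill:

```python
# Hypothetical illustration of the extra layers added by routing a known cost through an insurer.
expected_cost_of_care = 10_000   # assumed annual cost of diabetes care, in dollars
risk_premium = 0.10              # assumed cushion in case the cost estimate is too low
admin_overhead = 0.15            # assumed insurer operating costs, as a share of the expected cost
profit_margin = 0.05             # assumed normal profit, as a share of the expected cost

via_insurer = expected_cost_of_care * (1 + risk_premium + admin_overhead + profit_margin)
paid_directly = expected_cost_of_care   # government pays the provider; no insurance layer

print(f"Subsidy needed via insurer: ${via_insurer:,.0f}")    # $13,000
print(f"Cost if paid directly:      ${paid_directly:,.0f}")  # $10,000
```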

It will be clear by now that I really do not have any magic bullet. We face three serious and complex issues that can be separately analyzed. First, we need a system that provides health care services. Our current healthcare system does a tolerably good job for most people, although a large portion of the population does not receive adequate preventative and routine care and thus is forced to rely on expensive emergency treatment. The solution to that is fairly obvious and easy to implement—if we leave payment to the side. As discussed in my first blog, we must also recognize that a big part of America’s health expenses is due to chronic and avoidable conditions that result from the corporatization of food—a more difficult problem to resolve.

Second, our system might, on the other hand, provide in the aggregate too many resources toward the provision of healthcare (leaving other needs of our population unmet). Rational discussion and then rational allocation can deal with that. We don’t need “death panels” (which we already have—run by the insurance companies), but we do need rational allocation. I expect that healthcare professionals can do a far better job than Wall Street will ever do in deciding how much care and what type of care should be provided. Individuals who would like more care than professionals decide to be in the public interest can always pay out of pocket, or can purchase private insurance. Maybe the cost of Botox treatments is an insurable expense? Obviously, what is deemed to be necessary healthcare will evolve over time—it, like human rights, is “aspirational”—and some day might include nose jobs and tummy tucks for everyone.

Third, we need a way to pay for healthcare services. For routine healthcare and for pre-existing conditions it seems to me that the only logical conclusion is that the best risk pool is the population as a whole. It is in the public interest to see that the entire population receives routine care. It is also in the public interest to see that our little bundles of pre-existing conditions (otherwise known as infants) get the care they need. I cannot see any obvious advantage to involving private insurance in the payment system for this kind of care. If we decided to have more than one insurer, we would have to be sure that each had the same risks, hence, the same sort of insured pool. It is conceivable that competition among private insurers could drive down premiums, but it is more likely that competition would instead take the form of excluding as many claims as possible. We’d thus get high premiums and lots of exclusions—exactly what we’ve got now.

We could instead have a single national private insurer pursuing the normal monopoly pricing and poor service strategy (remember those good old days when you could choose from among one single telephone service provider?), but in that case we would have to regulate the premiums as well as the rejection of claims. Regulation of premiums cannot be undertaken without regulating the health care costs that the insurer(s) would have to cover. If we are going to go to all the trouble of regulating premiums, claim rejections, and healthcare prices we might as well go whole-hog and have the federal government pay the costs. Difficult and contentious, yes. Impossible? No—we can look to our fellow developed nations for examples, and to our own Medicare system.

Finally, there may still be a role for private insurers, albeit a substantially downsized one. Private insurance can be reserved for accidents, with individuals grouped according to similar risks: hang-gliders, smokers, and texting drivers can all be sorted into risk classes for insurance purposes. If it is any consolation to the downsized insurers, we also need to downsize the role played by the whole financial sector. Finance won’t like that because it has become accustomed to its outsized role. In recent years it has been taking 40% of corporate profits. It takes most of its share off the top—fees and premiums that it receives before anyone else gets paid. Rather than playing an auxiliary role, helping to ensure that goods and services get produced and distributed to those who need them, Wall Street has come to see its role as primary, with all aspects of our economy run by the Masters of the Universe. As John Kenneth Galbraith’s The Great Crash shows, that was exactly the situation our country faced in the late 1920s. It took the Great Depression to put Wall Street back into its proper place. The question is whether we can get it into the backseat without another great depression.

Health Insurance Diversions, Part 2: We Need Less Health Insurance, Not More

By L. Randall Wray

In my last blog, I argued that the benefits of extending health insurance coverage are probably overstated and are not likely to reduce health care costs or improve health outcomes. In this blog I will argue that we do not need more health insurance, rather, we need less.

Here is the point. Healthcare is not a service that should be funded by insurance companies. An individual should insure against expensive and undesirable calamities: tornadoes, fires, auto accidents. These need to be insurable risks, or insurance will not be made available. This means the events need to be reasonably random and relatively rare, with calculable probabilities that do not change much over time. As discussed in a previous blog, we need to make sure that the existence of insurance does not increase the probability of insured losses (http://neweconomicperspectives.blogspot.com/2009/09/selling-death-wall-streets-newest.html). This is why we do not let you insure your neighbor’s house. Insurance works by using the premiums paid in by all of the insured to cover the losses that infrequently visit a small subset of them. Of course, insurance always turns out to be a bad deal for almost all of the insured—the return is hugely negative because most of the insured never collect benefits, and the insurance company has to cover all costs and earn profits on its business. Its operating costs and profits are more or less equal to the net losses suffered by its policyholders.


Ideally, insurance premiums ought to be linked to individual risks; if this actually changes behavior so that risk falls, so much the better. That reduces the costs to the other policy holders who do not experience insured events, and it also increases profitability of the insurance companies. Competition among insurers will then reduce the premiums for those whose behavior modifications have reduced risks.

In practice, people are put into classes—say, over age 55 with no accidents or moving violations in the case of auto insurance. Some people are uninsurable—risks are too high. For example, one who repeatedly wrecks cars while driving drunk will not be able to purchase insurance. The government might help out by taking away the driver’s license, in which case the insurer could not sell insurance even if it were willing to take on the risks. Further, one cannot insure a burning house against fire because it is, well, already afire. And even if insurance had already been purchased, the insurer can deny a claim if it determines that the policyholder was at fault.
The insured try to get into the low-risk, low-premium classes; the insurers try to sort people by risk and try to narrow risk classes. To be sure, insurers do not want to avoid all risks—given a risk/return trade-off, higher-risk individuals will be charged higher premiums. Problems for the insurer arise if high-risk individuals are placed in low-risk classes and thus enjoy inappropriately low premiums. The problem for many individuals is that appropriately priced premiums will be unaffordable. At the extreme, if the probability of an insurable event approaches certainty, the premium that must be charged equals the expected loss plus insurance company operating costs and profits. However, it is likely that high-risk individuals would refuse insurance long before premiums reached that level.
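
The same logic can be put in a few lines. The numbers below are invented purely for illustration; the point is that as the probability of the insured event approaches certainty, the premium must approach the loss itself plus the insurer’s costs and profit:

```python
# Stylized premium arithmetic; every figure here is invented for illustration.
def required_premium(prob_of_loss, loss, operating_cost, profit):
    """Actuarially sound premium: expected loss plus the insurer's own costs and profit."""
    return prob_of_loss * loss + operating_cost + profit

# A rare, random event (a house fire) versus a near-certain one.
print(required_premium(prob_of_loss=0.002, loss=200_000, operating_cost=150, profit=50))  # 600.0
print(required_premium(prob_of_loss=0.95,  loss=200_000, operating_cost=150, profit=50))  # 190200.0
```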

Of course insured risks change over time—which means premiums charged might not cover the new risks. Cars become safer. More people wear seat belts. Fire resistant materials become standard and fire fighting technologies improve. Global warming produces more frequent and perhaps more severe hurricanes and tornadoes. But these changes are generally sufficiently slow that premiums and underwriting standards can be adjusted. Obviously, big and abrupt changes to risks would make it difficult to properly price premiums.

In any event, once insurance is written, the insurer does its best to deny claims. It will look at the fine print, try to find exclusions, and uncover pre-existing conditions (say, faulty wiring) that invalidate the claim. All of that is good business practice. Regulators are needed to protect the insured from overly aggressive denials of claims, a responsibility mostly of state government.
Let us examine the goal of universal health insurance from this perspective. It should now be obvious that using health “insurance” as the primary payment mechanism for health care is terribly inappropriate.

From the day of our births, each of us is a little bundle of pre-existing conditions—congenital abnormalities and genetic predispositions to disease or perhaps to risky behavior. Many of these conditions will only be discovered much later, probably in a doctor’s office. The health insurer will likely remain in the dark until a bill is submitted for payment. It then must seek a way to deny the claim. The insurer will check the fine print and patient records for exclusions and pre-existing conditions. Often, insurers automatically issue a denial, forcing patients to file an appeal. This burdens the insured and their care-givers with mountains of paperwork. Again, that is just good business practice—exactly what one would expect from an insurer.

And, again, it would be best to match individual premiums to risk, but usually people are placed into groups, often (for historical reasons) into employee groups. Insurers prefer youngish, urban, well educated, professionals—those jogging yuppies with good habits and enough income to join expensive gyms with personal trainers. Naturally, the insurer wants to charge premiums higher than what the risks would justify, and to exclude from coverage the most expensive procedures.
Many individuals are not really insurable, due to pre-existing conditions or risky behavior. However, many of these will be covered by negotiated group insurance due to their employment status. The idea is that the risks are spread and the healthier members of the group will subsidize the least healthy. This allows the insurer to escape the abnormally high risks of insuring high risk individuals. It is, of course, a bum deal for the healthy employees and their employers.

This is not the place for a detailed examination of the wisdom of tying health insurance to one’s employer. It is very difficult to believe that any justification can be made for it, so no one tries to justify it as far as I can tell. It is simply accepted as a horrible historical accident. It adds to the marginal cost of producing output since employers usually pick up a share of the premiums. It depresses the number of employees while forcing more overtime work (since health care costs are fixed per employee, not based on hours worked) as well as more part-time work (since insurance coverage usually requires a minimum number of hours worked). And it burdens “legacy firms” that offer life-time work as well as healthcare for retirees. Finally, and fairly obviously, it leaves huge segments of the population uncovered because they are not employed, because they are self-employed, or because they work in small firms. In short, one probably could not design a worse way of grouping individuals for the purposes of insurance provision. Would anyone reasonably propose that the primary means of delivering drivers to auto insurers would be through their employers? Or that auto insurance premiums ought to be set by the insurable loss experience of one’s co-workers? That is too ridiculous to contemplate—and so we do not–but it is what we do with health insurance.

Extending coverage to a diabetic against the risk of coming down with diabetes is like insuring a burning house. An individual with diabetes does not need insurance—he needs quality health care and good advice that is followed in order to increase the quality of life while reducing health care costs. Accompanying this health care with an insurance premium is not likely to have much effect on the health care outcome because it won’t change behavior beyond what could be accomplished through effective counseling. Indeed, charging higher premiums to those with diabetes is only likely to postpone diagnosis among those whose condition has not yet been identified. Getting people with diabetes into an insured pool increases costs for the other members of the pool. Both the insurer and the other insured members have an interest in keeping high-risk individuals out of the pool. Experience shows that health care costs follow an 80/20 pattern: 80% of health care costs are incurred due to treatment of 20% of patients. (Steffie Woolhandler, http://www.prospect.org/cs/articles?article=more_than_a_prayer_for_single_payer) If only a fraction of those high-cost individuals can be excluded, costs to the insurer can be cut dramatically.
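
A quick illustration of why the incentive to exclude is so strong. Only the 80/20 split comes from the text; the pool size and the share of high-cost patients excluded are made-up assumptions:

```python
# Illustration of the 80/20 cost pattern; pool size and exclusion share are assumptions.
total_claims = 100_000_000       # annual claims for a hypothetical insured pool, in dollars
high_cost_share_of_costs = 0.80  # incurred by the costliest 20% of patients

# Suppose the insurer keeps out just a quarter of the high-cost patients (5% of the whole pool),
# and that those excluded are typical of the high-cost group.
excluded_fraction_of_high_cost = 0.25
savings = total_claims * high_cost_share_of_costs * excluded_fraction_of_high_cost
print(f"Claims avoided: ${savings:,.0f} ({savings / total_claims:.0%} of the total)")
```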

We have nearly 50 million individuals without health insurance, and the number grows every day. Most health “reform” proposals would somehow insure many or most of these people—mostly by forcing them to buy insurance. All of them have pre-existing conditions, many of which are precisely the type that, if known, would make them uninsurable if insurance companies could exclude them. It is likely that only a fraction of the currently uninsured have been explicitly excluded from insurance because of existing conditions (many more are excluded because they cannot afford premiums), but every one of them has numerous existing conditions, and one of the main goals of “reform” is to make it more difficult for insurers to exclude people with existing conditions. In other words, “reform” will require people who do not want to buy insurance to buy it, and will require insurers who do not want to extend insurance to them to provide it. That is not a happy situation even in the best of circumstances.

So here is what the outcome will look like. Individuals will be forced to buy insurance against their will, often with premiums set unaffordably high. Government will provide a subsidy so that insurance can be provided. Insurance companies will impose high co-payments as well as deductibles that the insured cannot possibly afford. In this way, they will minimize claims and routine use of health care services by the nominally insured. When disaster strikes—putting a poorly covered individual into that high-cost 20% of patients—the insurer will find a way to dismiss claims. The “insured” individual will then be faced with bankrupting uncovered costs.
That is not far-fetched. Currently, two-thirds of household bankruptcies are due to health care costs. Surprisingly, most of those who are forced into bankruptcy had health insurance—but lost it after treatment began, or simply could not afford the out-of-pocket expenses that the insurer refused to cover. As Woolhandler says, in 2007 an individual in her 50s would pay an insurance premium of $4200 per year, with a $2000 deductible. Many of those currently without insurance would not be able to pay the deductible, meaning that the health insurance would not provide any coverage for routine care. Only an emergency or development of a chronic condition would drive such a patient into the health care system; with exclusions and limitations on coverage, the patient could find that even after meeting the $2000 deductible plus extra spending on co-payments, bankruptcy would be the only way to deal with all of the uncovered expenses. Of course, that leaves care providers with the bill—which is more or less what happens now without the universal insurance mandate.
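
The arithmetic of that example is worth spelling out. The premium and deductible are the figures Woolhandler cites; the routine-care bill is an assumption added for illustration:

```python
# What the nominally insured patient pays before the insurer pays anything.
annual_premium = 4200     # Woolhandler's 2007 figure for an individual in her 50s
deductible = 2000
routine_care_bill = 1500  # assumed annual routine-care spending, below the deductible

patient_pays = annual_premium + min(routine_care_bill, deductible)
insurer_pays = max(routine_care_bill - deductible, 0)
print(f"Patient pays: ${patient_pays:,}")  # $5,700
print(f"Insurer pays: ${insurer_pays:,}")  # $0
```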

In truth, insurance is a particularly bad way to provide payments for health care. Insurance is best suited to covering unexpected losses that result from acts of god, accidents, and other unavoidable calamities. But except in the case of teenagers and young adult males, accidents are not a major source of health care costs. In other words the costs to the insurer are not the equivalent of a tornado that randomly sets its sights on a trailer park. Rather, chronic illnesses, sometimes severe, and often those that lead to death, are more important. Selling insurance to a patient with a chronic and ultimately fatal illness would be like selling home insurance on a house that is slowly but certainly sliding down a cliff into the sea. Neither of these is really an insurable risk—rather each represents a certain cost with an actuarially sound premium that must exceed the loss (to cover operating costs and profits for the insurer). So if the policy were properly priced, no one would have an economic incentive to purchase it.

Another significant health care cost results from provision of what could be seen as public health services—vaccinations, mother and infant care, and so on. And a large part of that has nothing to do with calamity but rather with normal life processes: pregnancy, birth, well child care, school physicals, and certification of death at the other end of life. Treating a pregnancy as an insurable loss seems silly—even if it is unplanned. It does not make much sense to finance the health care costs associated with pregnancy and birth in the same way that we finance the costs of repairing an auto after a wreck—that is, through an insurance claim. Many of these expenditures have public goods aspects; while there are private benefits, if the health care cannot be covered through private insurance or out-of-pocket the consequences can lead to huge public sector costs. For this reason, it does not make sense to try to fund all private benefits of such care by charges to the individuals who may—or may not—be able and willing to pay for them. Nor does it make sense to raise premiums on one’s co-workers to cover expected pregnancies as young women join a firm.

Health care is not similar to protecting a homeowner against losses due to natural disasters. The risks to the health insurer are greatly affected by the behavior of the covered individuals, as well as by social policy. Discovering cures and new treatments can greatly increase, or reduce, costs. To a large extent that is outside the control of the insurer or the insured—if a new treatment becomes standard care, there will be pressures on insurers to cover it. Death might be the most cost effective way to deal with heart attacks, but standard practice does not present that as a standard treatment—nor would public policy want it to do so. In other words, social policy dictates to a large degree the losses that insurers must cover; acts of congress are not equivalent in their origins to acts of god—although their impacts on insurers are similar.

We currently pay most health care expenses through health insurance. But people need health care services on a routine basis—and not simply for unexpected calamities. We have become so accustomed to health insurance that we cannot understand how absurd it is to finance health care services in this manner. Our automobiles need routine maintenance, including oil changes. Imagine if we expected our auto insurer to cover such expected costs. We are, of course, all familiar with various “extended warranty” plans sold on practically all consumer items—from toasters to flat screen TVs. But we recognize that these are little more than scams—a way to increase the purchase price so that the retailer gets more revenue. We tolerate the scams because we can “just say no”—caveat emptor and all that. But health care “reform” proposes to force us to turn over a larger portion of our income to insurance companies—who will then do their best to ensure that any health care services we need will not be covered by the plan we are forced to buy. Unlike a broken toaster that can just be thrown out when the warranty fails to cover repairs, we do not, and do not want to, throw out people whose insurance coverage proves to be inadequate.

It is worthwhile to step back to look at the costs of providing health care payments through insurers. According to Woolhandler, 20 cents of every health care dollar goes to insurance companies. Another 11 cents goes to administrative overhead and profit of the health care providers. Much of that is due to all the paperwork required to try to get the insurance companies to pay claims (there are 1300 private insurers, with nearly as many different forms that health care providers must fill out to file claims); it is estimated that $350 billion a year could be saved on paperwork if the US adopted a single-payer system. (Matt Taibbi, “Sick and Wrong”, Rolling Stone, September 3, 2009.) Hence, it is plausible that a full quarter of all health care spending in the US results from the peculiar way that we finance our health care system—relying on insurance companies for a fundamentally uninsurable service. Getting insurance companies out of the loop would almost certainly “pay for” provision of health care services to all of those who currently have inadequate access—including the under-insured.
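
Putting those figures together yields the “full quarter” estimate. The split of provider overhead between insurance paperwork and everything else is my assumption, not Woolhandler’s:

```python
# Rough share of each health care dollar absorbed by the payment system itself.
insurer_share = 0.20       # Woolhandler: 20 cents of every dollar goes to insurance companies
provider_overhead = 0.11   # provider administrative overhead and profit
paperwork_portion = 0.5    # assumed fraction of that overhead driven by claims paperwork

finance_related = insurer_share + provider_overhead * paperwork_portion
print(f"Finance-related share of health spending: {finance_related:.0%}")  # roughly a quarter
```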

In sum, using insurers to provide funding is a complex, costly, and distorting method of financing health care. Imagine sending your weekly grocery bill to an insurance clerk for review, and having the grocer reimbursed by the insurer to whom you have been paying “food insurance” premiums—with some of your purchases excluded from coverage at the whim of the insurer. Is there any plausible reason for putting an insurance agent between you and your grocer? Why do we put an insurer between you and your health care provider?
Next time: How to build a better mousetrap.

Healthcare Diversions, Part 1: The Elephant in the Room

By L. Randall Wray

I have tried to stay clear of the current healthcare debate because all sides appear to be so far from any sensible policy that I remain indifferent to the outcome. However a recent three-hour layover in a major airport in the “new South” brought into sharp focus—at least for me—the elephant in the room that no one wants to discuss.

I won’t belabor the obvious point that Americans are, to put it delicately, a bit on the hefty side. But what really struck me is that many have evolved to the point that they are barely bipedal. As passengers attempted to perambulate their way from one gate to the next, it appeared that most had forgotten how to walk. I saw the most ungainly gaits—peregrination with great effort but little forward progress, the zigzag, the toe-heel-toe, the Quasimodo, the sideways roll, the zombie, the stop-start-stop again, all whilst occasionally careening off walls and other passengers. Of course, in some cases the attire dictated the method. A few of the boys had their pants waist bands down around their knees, forcing a duck-like waddle, made even more difficult by the need to use one hand to hold onto the belt lest gravity complete its work and bring the pants down to the ankles. A lot of the girls wore PJs and flip-flops, making it impossible to do much more than shuffle along. Some forty-somethings had eight inch stilettos in which only flamingos could parade about with grace. Still, the vast majority of passengers wore shoes marketed as sporting equipment, designed presumably for activities involving two legs and an upright posture rather than for those of the aquatic or slithering or knuckle-dragging variety.

And here is an interesting factoid: the average American walks just the length of three football fields daily. (Sierra Magazine, Jan/Feb 2007, p. 25) It shows. Since that is the average, it is no doubt boosted by the still considerable number of aging yuppies who manage 3k runs before breakfast, as well as by the children whom the soccer moms idolized by Sarah Palin drive to practice. I presume that most of the airport patrons typically manage little more than a few schlepps from couch to fridge each day, taking a momentary break from their average 1600 hours in front of the TV each year. (Uncle John’s Bathroom Reader, 2006, p. 115)

Like many other airlines, ours had provided a welcoming speech as the plane pulled up to the gate, helpfully reminding passengers that the most dangerous part of their journey would soon begin. I had always thought that they were alluding to the fact that we would shortly be behind the wheel of our autos, taking our lives into our own unprofessional hands as we attempted to pilot 2 tons of steel on a 65 mph freeway through an obstacle course of text-messaging drivers (who studies show are twice as impaired as drunk drivers). Actually, they were referring to our more immediate mission—to walk without major mishap to the baggage claim area before returning to the relative safety of a seated or prone position in a vehicle, like the humans in the WALL-E movie. A few centuries ago, our ancestors here in America were able to run down buffalo, or even mastodons, and kill them with spears. Today, most Americans can, with some effort, spear a French fry—provided it is not moving too quickly and they are seated to steady the aim.

Don’t get me wrong. I am not one of the contrarians who reject the argument that lack of access to healthcare by the uninsured contributes to the US’s relatively poor ranking in terms of health outcomes. Surely that explains some of our problem. But too little exercise, too much smoking, too much food, and especially too much bad food have got to be a huge factor. As Michael Pollan argues (In Defense of Food, 2008), unless we address the problem with American Food, Inc., we will not significantly improve our health no matter what we do with health care. According to Pollan, the cost to society of the American addiction to “fast food” (which is neither all that fast nor is it food) is already $250 billion per year in diet-related health care costs. One-third of Americans born in 2000 will develop diabetes in their lifetimes; on average, diabetes subtracts 12 years from life expectancy, and raises annual medical costs from $2500 for a person without diabetes to $13,000. While it is true that life expectancy today is higher than it was in 1900, almost all of this is due to reduction of death rates of infants and young children—mostly not due to the high-tech healthcare that we celebrate as the contribution of our innovative, profit-seeking system, but rather to lower-tech inoculations, sewage treatment, mosquito abatement, and cleaner water. The life expectancy of a 65-year-old in 1900 was only about 6 years less than it is for a 65-year-old today—and rates of chronic diseases like cancer and type 2 diabetes are much higher. (Pollan, p. 93)

Smoking causes 400,000 deaths yearly. Simply banning smoking from public places throughout our country could reduce deaths by 156,000 annually. (NPR 22 Sept) We incarcerate a far higher percentage of our population than any developed society on earth—and health care costs in prisons are exploding for the obvious reason that prisons are not healthy environments. Our relatively high poverty rates and the high percentage of the population left outside the labor market (especially young adult males without a high school degree) also contribute to very poor health outcomes. In a very important sense that I will explore thoroughly in the next blog, more health insurance coverage would no more resolve our health care problems than providing car insurance to chronic drunk drivers would solve our DUI problem.

So, before ramping up health care insurance, how about an education program to teach people the mechanics of walking? It is not as simple as it sounds. I speak from experience because some years ago I tore a calf muscle and after a long and painful healing process, I developed a gait that was all kinds of ugly. A physical therapist helped me to redevelop a human stride. While we are at it, we can reintroduce Americans to food. I don’t mean the corporate offal that Pollan calls “food-like substances”—products derived from plants and animals, but generated by breaking the original foods into their most basic molecules and then reconstituting them in a manner that can be more profitably marketed. What I mean is real food, produced by farmers and consumed after as little processing as possible. Preferably it will be local, cooked at home, eaten at a table, and will consist mostly of vegetables, grains, and fruits. And let us provide decent jobs to anyone ready to work, as an alternative to locking them up in prison. Ban smoking from all public places and regulate tobacco like the highly addictive and dangerous drug that it is. Together these policies will do far more to improve American health and to reduce health care costs than anything that “reformers” are proposing.

To conclude this part of the analysis: the benefits of extending health insurance coverage are almost certainly overstated and are not likely to make a major dent in our two comparative gaps: we spend far more than any other nation, yet we do not obtain better outcomes and in important areas actually get worse results. Nations that adopt diets closer to ours begin to suffer similar afflictions: obesity, diabetes, heart disease, hypertension, diverticulitis, malformed dental arches and tooth decay, varicose veins, ulcers, hemorrhoids, and cancer. (Pollan, p. 91) Even universal health insurance is not going to lower the costs of chronic afflictions that are largely due to the fact that we eat too much of the wrong kinds of food and get too little exercise. It makes more sense to attack the problem directly by increasing exercise, reducing caloric intake, and minimizing consumption of corporate food-like substances that make us sick, than to provide insurance so that those who suffer the consequences of our lifestyle can afford costly care.

Let me be as clear as possible: it is neither rational nor humane to deny health care to any US resident. Further, I accept the arguments contending that early treatment through primary care is far more cost effective than waiting for emergency care—and it is obviously more humane. However, in the next blog I will explain why I believe that extending health insurance is the wrong way to go if the goal is to extend health care coverage.

Selling Death: Wall Street’s Newest Bubble

When Wall Street’s commodities bubble crashed last year, I asked whether the next bubble might be in securitized body parts. Wall Street would search the world for transplantable organs, holding them in cold storage as collateral against securities sold to managed money such as pension funds. Of course, it was meant to be an apocryphal story about unregulated banksters gone wild. But as the NYT reports, Wall Street really is moving forward to market bets on death. The banksters would purchase life insurance policies, pool and tranche them, and sell securities that allow money managers to bet that the underlying “collateral” (human beings) will die an untimely death. You can’t make this stuff up.

This is just the latest Wall Street scheme to profit on death, of course. It has been marketing credit default swaps that allow one to bet on the death of firms, cities, and even nations. And the commodities futures speculation pushed by Goldman caused starvation and death around the globe when the prices of agricultural products exploded (along with the price of gasoline) between 2004 and 2008. But now Goldman will directly cash in on death.

Here is how it works. Goldman will package a bunch of life insurance policies of individuals with an alphabet soup of diseases: AIDS, leukemia, lung cancer, heart disease, breast cancer, diabetes, and Alzheimer’s. The idea is to diversify across diseases to protect “investors” from the horror that a cure might be found for one or more afflictions–prolonging life and reducing profits. These policies are the collateral behind securities graded by those same ratings agencies that thought subprime mortgages should be as safe as US Treasuries. Investors purchase the securities, paying fees to Wall Street originators. The underlying collateralized humans receive a single pay-out. Securities holders pay the life insurance premiums until the “collateral” dies, at which point they receive the death benefits. Naturally, managed money hopes death comes sooner rather than later.
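
To make the cash-flow logic concrete, here is a minimal sketch in Python. It is not the actual product structure, and every number in it is hypothetical; it simply shows why the securities holder profits more the sooner the insured person dies: fewer premiums are paid before the death benefit is collected (discounting is ignored for simplicity).

```python
# A stylized sketch (hypothetical numbers, not the real product) of the cash
# flows to a holder of securitized life insurance: buy the policy, keep paying
# the premiums, collect the death benefit when the "collateral" dies.

def investor_profit(purchase_price, annual_premium, death_benefit, years_until_death):
    """Net cash flow to the securities holder for a single policy (no discounting)."""
    premiums_paid = annual_premium * years_until_death
    return death_benefit - purchase_price - premiums_paid

# The same hypothetical policy, held until death at different horizons:
for years in (2, 10, 25):
    profit = investor_profit(purchase_price=400_000,
                             annual_premium=20_000,
                             death_benefit=1_000_000,
                             years_until_death=years)
    print(f"death after {years:>2} years: net {profit:+,} to the securities holder")
# The earlier the death, the fewer premiums are paid and the larger the
# profit -- which is exactly the moral hazard discussed below.
```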

Moral hazards abound. There is a fundamental reason why you are not permitted to take out fire insurance on your neighbor’s house: you would have a strong interest in seeing that house burn. If you held a life insurance policy on him, you probably would not warn him about the loose lug nuts on his Volvo. Heck, if you lost your job and you were sufficiently ethically challenged, you might even loosen them yourself.

Imagine the hit to portfolios of securitized death if universal health care were to make it through Congress. Or the efforts by Wall Street to keep new miracle drugs off the market if they were capable of extending the life of the human collateral. Who knows, perhaps the banksters’ next investment product will be gangsters in the business of guaranteeing lifespans do not exceed actuarially based estimates.

If you think all of this is far-fetched, you have not been paying attention. From Charles Keating’s admonition to his sales staff that the weak, meek and ignorant elderly widows always make good targets, to recent internal emails boasting about awarding high ratings to toxic securities, we know that Wall Street’s contempt for the rest of us knows no bounds. Those hedge funds holding CDS “insurance” fought to force the US auto industry into bankruptcy for the simple reason that they would make more from its death than from its resurrection. And the reason that most troubled mortgages cannot obtain relief is that the firms that service the mortgages gain more from foreclosure. It is not a big step for Wall Street and global money managers with big gambling stakes at risk to slow efforts to improve health. Indeed, it is easy to see some very nice and profitable synergies developing between Wall Street sellers of death and health insurers opposed to universal, single-payer health care. As AFL-CIO Secretary-Treasurer Trumka recently remarked on NPR, we already have committees deciding when to cut off care—the private health insurers decide when to deny coverage. It would not be in the interest of securities holders or health insurers to provide expensive care that would prolong the life of human collateral—a natural synergy that someone will notice.

It should be amply evident that Wall Street intends to recreate the conditions that existed in 2005. Virtually every element that created the real estate, commodities, and CDS bubbles will be replicated in the securitization of life insurance policies. If Wall Street succeeds in this scheme, it will probably bankrupt the life insurance companies (premiums are set on the assumption that many policyholders will cancel long before death—but once securitized, the premiums will be paid so that benefits can be collected). But it is likely that the bubble will be popped long before that happens, at which point Wall Street will look for the next opportunity. Securitized pharmaceuticals? Body parts?

Here’s the problem. There is still—even after massive losses in this crisis—far too much managed money chasing far too few returns. And there are far too many “rocket scientists” looking for the next newest and bestest financial product. Each new product brings a rush of funds that narrows returns; this spurs rising leverage ratios, as borrowed funds are used to increase volume and make up for thin spreads; this drives risk far too high to be covered by the returns. Eventually, lenders and managed money try to get out, but de-levering creates a liquidity crisis as asset prices plunge. Resulting losses are socialized as government bails out the banksters. Repeat as often as necessary.
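
A back-of-the-envelope sketch, with purely hypothetical numbers, of the dynamic just described: if the spread is treated as the net return per dollar of assets after funding costs, then hitting a given return-on-equity target requires more leverage as the spread narrows, and the more leveraged the position, the smaller the fall in asset prices needed to wipe out the equity and force the de-levering.

```python
# Hypothetical numbers only: how narrowing spreads push leverage up, and how
# higher leverage shrinks the asset-price fall that erases the equity cushion.

target_roe = 0.15  # return on equity that managed money demands

for spread in (0.030, 0.015, 0.005):    # spread narrows as funds rush in
    leverage = target_roe / spread      # assets per dollar of equity needed to hit the target
    wipeout_loss = 1 / leverage         # asset-price drop that wipes out equity
    print(f"spread {spread:.1%}: leverage {leverage:4.0f}x, "
          f"a {wipeout_loss:.1%} fall in asset prices wipes out equity")
```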

Reform of the US financial sector is neither possible nor would it ever be sufficient. As any student of horror films knows, you cannot reform vampires or zombies. They must be killed (stakes through the hearts of Wall Street’s vampires, bullets to the heads of zombie banks). In other words, the financial system must be downsized.

Fed Profits From The Bailout? Don’t Count Those Chickens Before They Hatch

A flurry of reports appears to indicate simultaneously that the government’s bailout of Wall Street is working to bring banks back to life, and that the Fed is making profits on the deal: See for instance here, here and here. According to these reports, the Fed received $19 billion in interest on its emergency loans to troubled institutions, which was about $14 billion more than it would have received if it had instead bought Treasuries. In addition, it was reported that the Fed made $4 billion of profits from the eight largest banks that have repaid TARP funds. This is seen as good news in Congress: “The taxpayers want their money back and they want the government out of our banking system,” said Representative Jeb Hensarling.

There are three fundamental problems with these stories. The first is that they raise a question about the function of the Fed: should a branch of government seek profits at the expense of the private sector? As we all know, the Fed is normally profitable and returns its earnings, above the statutory 6% dividend on its capital, back to the Treasury. When the Fed profits on its holdings of government debt, that is essentially the government paying itself and thus of no consequence. Profiting at the expense of private institutions—even if they are the hated “banksters” of Wall Street—doesn’t seem to be something to celebrate. Especially when we are in a deep recession or depression, with a long way to go before the financial system recovers. All else equal, I’d rather see the private sector accumulate some profits so that it can recover.

Don’t get me wrong. I want retribution, too. We need a “bank holiday” to begin next Friday: close suspect banks, including all of the biggest that were subjected to the wimpy “stress test”. Spend the weekend going through the books and then on Monday begin to resolve the insolvent, to prosecute the banksters for fraud, and to retrieve from them their outsized bonuses financed by the public purse. It is payback time. My only objection is to the notion that we should be celebrating because the Fed appears to have made a profit on (a small part of) the bailout.

Second, there is every reason to suspect that the banks that have repaid TARP money and those making payments on loans from the Fed are still massively insolvent at any true valuation of the toxic waste still on their balance sheets. Recent reported profits in the financial sector are window-dressing, designed to fuel the irrationally exuberant stock market bubble. This will allow traders and other employees to cash in their stock options to recover some of the losses they incurred last year, even as bonuses are boosted for this year. Indeed, the main reason for returning the borrowed funds is to escape controls on executive salaries and bonuses. And, finally, it provides some important PR to counteract the growing anger over the financial bailouts. Only the foolish or those with some dog in the fight will believe that the banksters have really managed to restore health to the financial sector as they continue to do what they did to cause the crisis.

Last but not least, the Fed’s “profits” are based on an infinitesimally small fraction of its ramped-up operations—its liquidity facilities (which also include discount window loans and currency swaps with other central banks, purchases of commercial paper and financing for investors in asset-backed securities). It has also spent $1.75 trillion buying bad assets, with any losses on those excluded from the profits numbers. The government’s profits also exclude current and expected spending on the rest of its reported $23.7 trillion commitment to the financial bailout and fiscal stimulus package. The failure of just one medium-sized bank could easily wipe out the entire $14 billion of profits that has attracted so much notice. (See also Dean Baker on the government’s “profits”)

It is ironic that Euroland’s regulators are calling for much more radical steps than Washington is willing to take. German Chancellor Angela Merkel and French President Nicolas Sarkozy are calling for more regulation and for limits on executive compensation even as the Obama administration continues to argue that such limits would constrain the financial sector’s ability to retain the “best and the brightest”. If the bozos who created this crisis are the best that Wall Street can find, it would be better to shut down the US financial system than to keep them in charge. It is doubly ironic that Nigeria (a country that normally would not come immediately to mind as a role model) has actually charged the leadership of five of its major banks with crimes. Each of these banks had received government money in a bailout, and the CEOs stand accused of “fraud, giving loans to fake companies, lending to businesses they had a personal interest in and conspiring with stockbrokers to drive up share prices.”

Isn’t that normal business practice for Wall Street banks favored by Ben Bernanke and Timothy Geithner? It is time to get the NY Fed out of Goldman’s back pocket, and to permanently downsize the role played by Wall Street and the Fed in our economic system.

Money as a Public Monopoly

By L. Randall Wray

What I want to do in this blog is to argue that the reason both theory and policy get money “wrong” is that economists and policymakers fail to recognize that money is a public monopoly*. Conventional wisdom holds that money is a private invention of some clever Robinson Crusoe who tired of the inconveniences of bartering fish with a short shelf-life for desired coconuts hoarded by Friday. Self-seeking globules of desire continually reduced transactions costs, guided by an invisible hand that selected the commodity with the best characteristics to function as the most efficient medium of exchange. Self-regulating markets maintained a perpetually maximum state of bliss, producing an equilibrium vector of relative prices for all tradables, including the money commodity that serves as a veiling numeraire.

All was fine and dandy until the evil government interfered, first by reaping seigniorage from monopolized coinage, next by printing too much money to chase the too few goods extant, and finally by efficiency-killing regulation of private financial institutions. Especially in the US, misguided laws and regulations simultaneously led to far too many financial intermediaries but far too little financial intermediation. Chairman Volcker delivered the first blow to restore efficiency by throwing the entire Savings and Loan sector into insolvency, and then freeing thrifts to do anything they damn well pleased. Deregulation, which actually dates to the Nixon years and even before, morphed into a self-regulation movement in the 1990s on the unassailable logic that rational self-interest would restrain financial institutions from doing anything foolish. This was all codified in the Basle II agreement that spread Anglo-Saxon anything-goes financial practices around the globe. The final nail in the government’s coffin would be to preserve the value of money by tying monetary policy-makers’ hands to inflation targeting, and fiscal policy-makers’ hands to balanced budgets. All of this would lead to the era of the “great moderation”, with financial stability and rising wealth to create the “ownership society” in which all worthy individuals could share in the bounty of self-regulated, small-government capitalism.

We know how that story turned out. In all important respects we managed to recreate the exact same conditions of 1929 and history repeated itself with the exact same results. Take John Kenneth Galbraith’s The Great Crash, change the dates and some of the names of the guilty and you’ve got the post mortem for our current calamity.

What is the Keynesian-institutionalist alternative? Money is not a commodity or a thing. It is an institution, perhaps the most important institution of the capitalist economy. The money of account is social, the unit in which social obligations are denominated. I won’t go into pre-history, but I trace money to the wergild tradition—that is to say, money came out of the penal system rather than from markets, which is why the words for monetary debts or liabilities are associated with transgressions against individuals and society. To conclude, money predates markets, and so does government. As Karl Polanyi argued, markets never sprang from the minds of higglers and hagglers, but rather were created by government.

The monetary system, itself, was invented to mobilize resources to serve what government perceived to be the public purpose. Of course, it is only in a democracy that the public’s purpose and the government’s purpose have much chance of alignment. In any case, the point is that we cannot imagine a separation of the economic from the political—and any attempt to separate money from politics is, itself, political. Adopting a gold standard, or a foreign currency standard (“dollarization”), or a Friedmanian money growth rule, or an inflation target is a political act that serves the interests of some privileged group. There is no “natural” separation of a government from its money. The gold standard was legislated, just as the Federal Reserve Act of 1913 legislated the separation of Treasury and Central Bank functions, and the Balanced Budget Act of 1987 legislated the ex ante matching of federal government spending and revenue over a period determined by the celestial movement of a heavenly object. Ditto the myth of the supposed independence of the modern central bank—this is but a smokescreen to protect policy-makers should they choose to operate monetary policy for the benefit of Wall Street rather than in the public interest (a charge often made and now with good reason).

So money was created to give government command over socially created resources. Skip forward ten thousand years to the present. We can think of money as the currency of taxation, with the money of account denominating one’s social liability. Often, it is the tax that monetizes an activity—that puts a money value on it for the purpose of determining the share to render unto Caesar. The sovereign government names what money-denominated thing can be delivered in redemption against one’s social obligation or duty to pay taxes. It can then issue the money thing in its own payments. That government money thing is, like all money things, a liability denominated in the state’s money of account. And like all money things, it must be redeemed, that is, accepted by its issuer. As Hyman Minsky always said, anyone can create money (things), the problem lies in getting them accepted. Only the sovereign can impose tax liabilities to ensure its money things will be accepted. But power is always a continuum and we should not imagine that acceptance of non-sovereign money things is necessarily voluntary. We are admonished to be neither a creditor nor a debtor, but try as we might all of us are always simultaneously both. Maybe that is what makes us Human—or at least Chimpanzees, who apparently keep careful mental records of liabilities, and refuse to cooperate with those who don’t pay off debts—what is called reciprocal altruism: if I help you to beat the stuffing out of Chimp A, you had better repay your debt when Chimp B attacks me.

OK, I have used up two-thirds of my allotment, and you are all wondering what this has to do with regulation of monopolies. The dollar is our state money of account and high-powered money (HPM: coins, green paper money, and bank reserves) is our state monopolized currency. Let me make that just a bit broader because US Treasuries (bills and bonds) are just HPM that pays interest (indeed, Treasuries are effectively reserve deposits at the Fed that pay higher interest than regular reserves), so we will include HPM plus Treasuries as the government currency monopoly—and these are delivered in payment of federal taxes, which destroys currency. If government emits more in its payments than it redeems in taxes, currency is accumulated by the nongovernment sector as financial wealth. We need not go into all the reasons (rational, irrational, productive, fetishistic) that one would want to hoard currency, except to note that a lot of the nonsovereign dollar-denominated liabilities are made convertible (on demand or under specified circumstances) to currency.
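
That last accounting point can be illustrated with a minimal bookkeeping sketch in Python (the figures are purely hypothetical): the currency and Treasuries held by the nongovernment sector are simply the running sum of what the government has emitted in its payments minus what it has redeemed in taxes.

```python
# Hypothetical figures: nongovernment holdings of government currency grow
# when emission (spending) exceeds redemption (taxes), and shrink in a surplus.

nongovernment_wealth = 0
years = [
    # (government payments, tax redemptions)
    (1_000, 900),
    (1_000, 950),
    (1_000, 1_050),  # a surplus year: redemption exceeds emission
]

for spending, taxes in years:
    nongovernment_wealth += spending - taxes   # deficit adds, surplus subtracts
    print(f"spent {spending:,}, taxed {taxes:,}, "
          f"nongovernment holdings now {nongovernment_wealth:,}")
```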

Since government is the only issuer of currency, like any monopoly government can set the terms on which it is willing to supply it. If you have something to sell that the government would like to have—an hour of labor, a bomb, a vote—government offers a price that you can accept or refuse. Your power to refuse, however, is not that great. When you are dying of thirst, the monopoly water supplier has substantial pricing power. The government that imposes a head tax can set the price of whatever it is you will sell to government to obtain the means of tax payment so that you can keep your head on your shoulders. Since government is the only source of the currency required to pay taxes, and at least some people do have to pay taxes, government has pricing power.

Of course, it usually does not recognize this, believing that it must pay “market determined” prices—whatever that might mean. Just as a water monopolist cannot let the market determine an equilibrium price for water, the money monopolist cannot really let the market determine the conditions on which money is supplied. Rather, the best way to operate a money monopoly is to set the “price” and let the “quantity” float—just like the water monopolist does. My favorite example is a universal employer-of-last-resort (ELR) program in which the federal government offers to pay a basic wage and benefit package (say $10 per hour plus usual benefits), and then hires all who are ready and willing to work for that compensation. The “price” (labor compensation) is fixed, and the “quantity” (number employed) floats in a countercyclical manner. With ELR, we achieve full employment (as normally defined) with greater stability of wages, and as government spending on the program moves countercyclically, we also get greater stability of income (and thus of consumption and production)—a truly great moderation.
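
A hedged sketch of that fix-the-price, let-the-quantity-float logic, with entirely hypothetical numbers: the program wage is set, everyone who wants work at that wage gets it, and the number in the program (and hence spending on it) expands in a slump and shrinks in a boom as private employers hire workers out of the pool.

```python
# Hypothetical ELR buffer-stock sketch: the wage (price) is fixed, program
# employment (quantity) floats countercyclically with private hiring.

ELR_WAGE = 10.0          # fixed hourly wage offered to all takers
LABOR_FORCE = 100_000    # people ready and willing to work at that wage

for phase, private_jobs in [("boom", 97_000), ("slump", 90_000), ("recovery", 95_000)]:
    elr_jobs = LABOR_FORCE - private_jobs       # quantity floats countercyclically
    program_spending = elr_jobs * ELR_WAGE      # hourly wage bill moves with the cycle
    print(f"{phase:>8}: private jobs {private_jobs:,}, ELR jobs {elr_jobs:,}, "
          f"hourly ELR wage bill ${program_spending:,.0f}")
# In every phase the labor force is fully employed at the fixed wage; only the
# split between private and program employment, and program spending, varies.
```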

I have said anyone can create money (things). I can issue IOUs denominated in the dollar, and perhaps I can make my IOUs acceptable by agreeing to redeem them on demand for US government currency. The conventional fear is that I will issue so much money that it will cause inflation, hence orthodox economists advocate a money growth rate rule. But it is far more likely that if I issue too many IOUs they will be presented for redemption. Soon I run out of currency and am forced to default on my promise, ruining my creditors. That is the nutshell history of most private money (things) creation.

But we have always anointed some institutions—called banks—with special public/private partnerships, allowing them to act as intermediaries between the government and the nongovernment. Most importantly, government makes and receives payments through them. Hence, when you receive your Social Security payment it takes the form of a credit to your bank account; you pay taxes through a debit to that account. Banks, in turn, clear accounts with the government and with each other using reserve accounts (currency) at the Fed, which was specifically created in 1913 to ensure such clearing at par. To strengthen that promise, we introduced deposit insurance so that for most purposes, bank money (deposits) functions like government currency.

Here’s the rub. Bank money is privately created when a bank buys an asset—which could be your mortgage IOU backed by your home, or a firm’s IOU backed by commercial real estate, or a local government’s IOU backed by prospective tax revenues. But it can also be one of those complex sliced and diced and securitized toxic waste assets you’ve been reading about. A clever and ethically challenged banker will buy completely fictitious “assets” and pay himself huge bonuses for nonexistent profits while making uncollectible “loans” to all of his deadbeat relatives. (I use a male example because I do not know of any female frauds, which is probably why the scales of justice are always held by a woman.) The bank money he creates while running the bank into the ground is as good as the government currency the Treasury creates serving the public interest. And he will happily pay outrageous prices for assets, or lend to his family, friends and fellow frauds so that they can pay outrageous prices, fueling asset price inflation. This generates nice virtuous cycles in the form of bubbles that attract more purchases until the inevitable bust. I won’t go into output price inflation except to note that asset price bubbles can fuel spending on consumption and investment goods, spilling over into commodities prices, so under some conditions there can be a link between asset and output price inflations.
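
A minimal double-entry sketch of that first sentence, with illustrative names and numbers only: when the bank buys an asset, good or fictitious, it creates the deposit that pays for it, so both sides of its balance sheet grow and new bank money exists that did not exist before. Reserves (currency) are needed only later, when payments have to clear with other banks or the government.

```python
# Illustrative double-entry sketch: buying an asset creates the deposit that
# pays for it; the deposit is new bank money, spendable at par either way.

bank = {"assets": {"reserves": 100}, "liabilities": {"deposits": 0}}

def buy_asset(bank, name, price):
    """The bank acquires an asset and creates a deposit of equal size to pay for it."""
    bank["assets"][name] = bank["assets"].get(name, 0) + price
    bank["liabilities"]["deposits"] += price

buy_asset(bank, "home mortgage IOU", 300)
buy_asset(bank, "fictitious CDO", 500)   # the fraudster's purchase works the same way

print(bank)
# Deposits of 800 now exist that did not before; only later settlement with
# other banks or the government requires reserves (currency) at the Fed.
```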

The amazing thing is that the free marketeers want to “free” the private financial institutions to licentious behavior, but advocate reining in government on the argument that excessive issue of money is inflationary. Yet we have effectively given banks the power to issue government money (in the form of government insured deposits), and if we do not constrain what they purchase they will fuel speculative bubbles. By removing government regulation and supervision, we invite private banks to use the public monetary system to pursue private interests. Again, we know how that story ends, and it ain’t pretty. Unfortunately, we now have what appears to be a government of Goldman, by Goldman, and for Goldman that is trying to resurrect the financial system as it existed in 2006—a self-regulated, self-rewarding, bubble-seeking, fraud-loving juggernaut.

To come to a conclusion: the primary purpose of the monetary monopoly is to mobilize resources for the public purpose. There is no reason why private, for-profit institutions cannot play a role in this endeavor. But there is also no reason to believe that self-regulated private undertakers will pursue the public purpose. Indeed, as institutionalists we probably would go farther and assert that both theory and experience tell us precisely the opposite: the best strategy for a profit-seeking firm with market power never coincides with the best policy from the public interest perspective. And in the case of money, it is even worse because private financial institutions compete with one another in a manner that is financially destabilizing: by increasing leverage, lowering underwriting standards, increasing risk, and driving asset price bubbles. Unlike my ELR example above, private spending and lending will be strongly pro-cyclical. All of that is in addition to the usual arguments about the characteristics of public goods that make it difficult for the profit-seeker to capture external benefits. For this reason, we need to analyze money and banking from the perspective of regulating a monopoly—and not just any monopoly but rather the monopoly of the most important institution of our society.

* Much confusion is generated by using the term “money” to indicate a money “thing” used to satisfy one of the functions of money. I will be careful to use the term “money” to refer to the unit of account or money as an institution, and “money thing” to refer to something denominated in the money of account—whether that is currency, a bank deposit, or other money-denominated liability.