Mitchell's Musings

  • 21 Oct 2016 11:36 AM | Daniel Mitchell (Administrator)

    Mitchell’s Musings 10-17-16: Most Economists

    Daniel J.B. Mitchell

    I recently happened to hear an NPR public radio broadcast on the depreciation of the British pound since the Brexit vote of last June.[1] The program began with this sentence:

    Since the U.K. voted to leave the European Union last summer, the country's currency - the pound - has lost about 16 percent of its value against the dollar. Most of the damage, according to economists, was self-inflicted.

    It ended with this sentence:

    The pound dropped again this morning trading below $1.23. Most economists think it has yet to hit bottom.

    In between the beginning and the end, there was what you might expect. There were references to the Brexit vote of June, anecdotes on how foreign tourists in Britain were benefiting from reduced costs, etc. But let’s start with the beginning sentence which references the fall in the pound.


    Source: as of October 11, 2016.


    As the chart above shows, relative to the euro, the pound at this writing is about where it was during and in the aftermath of the Great Recession. Until the Brexit vote, it tended to rise relative to the euro. When the vote occurred, it fell. And the pound has generally fallen since.

    The NPR program describes the fall in the pound as “damage” which was “self-inflicted.” There is no doubt that the Brexit vote was an Act of Man rather than an Act of God. But is it correct to view a currency depreciation as “damage”? Note that if the pound declined relative to the euro, it is also true (by inversion of the pound/euro ratio) that the euro appreciated relative to the pound. So was the euro-zone “helped” by its currency’s appreciation?
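    The inversion point is simple arithmetic: a fall of X (as a fraction) in one currency's price implies a rise of X/(1 - X) in the other currency's price. A minimal sketch, using the broadcast's 16 percent figure purely for illustration:

```python
# If currency A falls by some fraction against currency B, the B-per-A
# rate's reciprocal (A-per-B) rises by more than that fraction.
def counterpart_appreciation(depreciation):
    """Given a fractional fall in currency A priced in B,
    return the fractional rise in B priced in A."""
    return 1.0 / (1.0 - depreciation) - 1.0

rise = counterpart_appreciation(0.16)
print(f"counterpart appreciation: {rise:.1%}")  # about 19.0%
```

So the same market move that is a 16 percent "loss" from London's vantage point is a roughly 19 percent "gain" from the other side, which is exactly why the loss/gain framing by itself says nothing about welfare.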

    Other things equal, the decline in the pound made British exports more competitive and imports to Britain less competitive. So on that dimension, you could just as well say Britain was helped and the euro-zone was damaged. Suppose you applied the same logic to the U.S. and its dollar that the NPR broadcast applied to Britain and its pound. The chart below shows an index of the U.S. dollar relative to the currencies of its trading partners since 2000.

    If you equate the exchange rate with national welfare, we were never as well off as we were just after the dot-com bust and the related recession. During the recovery from that recession, things got progressively worse if we use the exchange rate as our measure. The Great Recession then gave our welfare a big boost temporarily. But the recovery from that recession made us worse off again. We didn’t see a big improvement until the 2015-16 election cycle made the future of U.S. economic policy uncertain. If nothing in that interpretation makes much sense to you, you now can see the folly of identifying exchange rate trends with national welfare.


    Source: FRED database of the Federal Reserve Bank of St. Louis.


    There seems to be an underlying assumption in the NPR broadcast that the fact that Brexit inherently changed some fundamental determinants of the British exchange rate means the pound must be lower than it was. However, the demand for the pound is ultimately a function of the demand for British exports and for British investment assets (British stocks, bonds, real property, etc.); the supply of the pound is ultimately a function of British demand for imports and foreign investment assets. How the demand and supply will balance out once the dust settles, i.e., what the eventual long-term exchange rate will be, is unknown. It will depend on such things as British inflation relative to that of its trading partners and rates of saving at home and abroad. But note that the question of whether or not Brexit was a good idea for Britain in terms of its national economic and political welfare is simply not the same thing as the exchange rate.

    Well, that’s fundamentals. The broadcast closes with the idea that “most economists” think – was there a survey? – that the pound will fall further. That prognosis isn’t accompanied by a time period. Is it by tomorrow? By next week? Whatever the time period may be, it seems to be a short-term prediction. And if it is short term, it should also be the case that most economists are going short on the pound because they know it will soon fall. But is there any evidence that, for example, British economists have been putting their holdings in euros? Are they going further and borrowing pounds and then investing them in euro-denominated assets? If it were evident that the pound would be notably lower in value relative to the euro in the near future, the rush into euros would make it lower relative to the euro today.

    Bottom lines:

    1) Will the pound be lower tomorrow than it was today? If you say “yes,” you have about a 50-50 chance of being right.

    2) Is it a Bad Thing for Britain that the pound exchange rate is lower than it was pre-Brexit? Other things equal, depreciation of the pound stimulates British exports and discourages imports. So let’s just say that the answer is more complicated than assuming that national welfare moves with the exchange rate.

    3) Finally, what should NPR have said in its broadcast? Probably not much more than that, given the decline in the pound, Americans might want to consider a London holiday.


    [1] “Brexit Results Prove Increasingly Costly to Britons,” Morning Edition, October 11, 2016. Available at:

  • 02 Oct 2016 4:27 PM | Daniel Mitchell (Administrator)

    Mitchell’s Musings 10-3-16: Fed Politics

    Daniel J.B. Mitchell

    At the September meeting of the Federal Reserve, interest rates were not changed. Presidential candidate Donald Trump indicated he thought the Fed was being political, i.e., helping his opponent. Of course, decision makers at the Fed are aware that there is an election cycle in progress. And, in a sense, they were responding to political events, although not in the way Trump suggests.

    First, it should be noted that the case for raising rates at this time is weak. Not only is inflation low, it is expected to remain low. As the chart below indicates, financial markets are not anticipating a burst of inflation. The spreads between conventional Treasury securities and inflation-adjusted Treasuries – indexes of the expected rate of price inflation – remain below 2%. In fact, except during the Great Recession and its immediate aftermath, expectations are lower now than they were during previous years when the economy was softer. Put another way, price inflation expectations remain low, despite a seemingly tightening economy, so maybe the economy isn’t all that tight.
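    The spread described above is the "breakeven" inflation rate: the nominal Treasury yield minus the yield on an inflation-indexed Treasury of the same maturity. A minimal sketch with hypothetical yields (the actual series would come from FRED; these numbers are made up for illustration):

```python
# Breakeven inflation = nominal Treasury yield - real (TIPS) yield.
# Both yields below are hypothetical, chosen only to illustrate the idea.
nominal_10yr = 0.0175   # hypothetical 10-year Treasury yield
tips_10yr = 0.0005      # hypothetical 10-year TIPS yield

breakeven = nominal_10yr - tips_10yr
print(f"market-implied expected inflation: {breakeven:.2%}")  # 1.70%, below 2%
```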

    And that is a second point. Despite a decline in unemployment to around 5% (the latest monthly figure is 4.9%), it is still unclear if the low level of unemployment in fact signifies “full employment.” As has been much discussed, the employment-to-population ratio is below previous peaks. The latest UCLA Anderson Forecast featured a chart for California. The equivalent for the U.S. as a whole would show much the same thing. As can be seen below, the ratio is still lower than at the previous cyclical peak. One explanation is that demographic changes account for the reduction. When you make “corrections” for demographic trends, the current ratio seems to be roughly equivalent to the prior peak, as the straight line on the chart shows.

    But even with that correction, there remains ambiguity. Why not consider the peak before the prior peak, i.e., the dot-com boom peak? The demographic trend would produce a line with a similar slope from that peak, and we would be below it. In short, the evidence for the proposition that we have hit some kind of capacity constraint is ambiguous. Which peak is really THE peak for comparison?

    You could look at wage behavior rather than at prices. There, we do find some evidence of wage growth acceleration. The latest data on the Employment Cost Index did show some acceleration. But as the chart below indicates, the series can be volatile. The series stayed at about 2% per annum during much of the recovery, blipped up in 2014, but then came down again. So surely a case could be made for waiting a few quarters before passing a final judgment and making a policy change.

    Employment Cost Index, Total Compensation, Private Sector, 12-Month Percent Changes

    On the output side, real GDP growth has been relatively anemic so far in 2016. Generally, recovery from the trough of the Great Recession to the present has been slow. Perhaps not surprisingly, that fact has given rise to discussion of whether the post-World War II growth rate was anomalously fast. But even if there is now a “new normal” of real growth at around 2% per annum, we haven’t seen even that pace in 2016. During the first half of the year, measured real growth was a bit over 1% per annum.

    Bottom line: Even if 2016 were not a presidential election year, the Fed might have been dovish about raising interest rates. When there is uncertainty about diagnosing economic conditions, the tendency is not to shift policy.[1] Apart from the unknowns described above, however, the presidential election itself – or rather its outcome – has created an even larger element of uncertainty than price inflation, wage inflation, or real growth.

    One interesting feature of the September UCLA Anderson Forecast was an attempt at economic modeling of the election results by Forecast economist William Yu. In recent years, there has been an increased interest in political forecasting based on economic conditions. Yu developed a model across states and time using as explanatory variables general economic trends, demographics, past voting behavior, and other factors. He looked particularly at electoral votes in the “swing” states. And the conclusion was that the election was too close to call. Popular votes in the swing states hovered around 50-50 for the two major party candidates.

    Given the oddities of the 2016 election, and the possibility that there could be economic turmoil when the results are known, why would the Fed raise interest rates two months before Election Day? It’s likely that election uncertainty, more than inflation uncertainty, drove the decision. It’s fine to forecast, but surely confidence in the results of the exercise must drop when you don’t know who will be making policy. In particular, what a Trump presidency would mean for economic policy and economic outcomes is hard to predict. In that sense, you could say the decision at the Fed to wait until after the election before making a change in monetary policy was “political.”


    [1] Some participants on the Federal Open Market Committee did want to raise rates. None of the members of the Board of Governors, however, voted for such an increase. The degree of uncertainty regarding economic trends is reflected in the official transcript of the news conference after the September decision:

    Fed Chair Janet Yellen in response to a reporter's question after the interest rate decision: "...I think we are trying to understand some difficult issues. There is less disagreement among participants in the (Federal Open Market) Committee than you might think, listening to speeches and commentary. I think we all agree that the economy is making progress, that we are close to an unemployment rate that is one that’s sustainable in the longer run. We all agree we are undershooting our inflation goal, and that we want to make sure we stay on a course that raises that to 2 percent. And we’re struggling with a difficult set of issues about what is the “new normal” in this economy and in the global economy more generally, which explains why we keep revising down the (real growth) rate path..."


  • 23 Sep 2016 11:40 AM | Daniel Mitchell (Administrator)

    Mitchell’s Musings 9-26-16: Measurement for What? – Part 2

    Daniel J.B. Mitchell

    I had hoped to be done, for a while anyway, with defined-benefit pensions with last week’s musing.[1] But it was not to be. Recently, the New York Times got interested in CalPERS, the giant California pension fund that covers most state employees (other than employees of the University of California [UC]) and many local government employees.[2] The essence of the Times article is that something nefarious is going on because when a local government wanted to terminate its pension plan with CalPERS, CalPERS used a low interest rate to calculate the liability that the government would have to pay to CalPERS. The rate was lower than CalPERS’ official expected rate of earnings, which is otherwise used by CalPERS as a discount rate for liabilities generally.

    So let’s separate some issues here. Last week, we dealt with the question of using a lower interest rate for discounting pension liabilities than is used for expected earnings on pension assets. We noted that some folks argue that because the pension liability to employees and retirees is ironclad, the discount rate should reflect a riskless investment. What we indicated last week was that an ongoing pension plan has no finite duration and that, if over a long period it earns what it expects and funds the plan in accord with that expectation, it will have enough money in the till to meet its liabilities. Apparently, there are some folks (including an adviser to the UC Board of Regents) who think that if you expect to earn, say, 6% per annum, you fund accordingly, and you in fact earn 6%, you will still run out of money unless you discount liabilities by a lower rate than 6%. Simple arithmetic says that conclusion is wrong.

    Of course, the big IF in that idea is that you in fact earn in the long run what you today reasonably can expect. If you use an unreasonably high rate of return to discount future liabilities and to estimate future earnings, you will indeed come up short. The lesson is that you should use a reasonable rate. And you should keep adjusting that estimated rate based on incoming information. There is little question about that simple notion.

    What the Times seems to be saying is that using the lower rate for the local government that wanted to terminate its plan proves somehow that CalPERS’ expected earnings rate is too high and that really the termination discount rate for liabilities is what it should be using to calculate its funding ratio. To be fair, the article is not entirely clear about what is wrong, but the reader is left with the impression that using two rates shows something is phony. The headline with “two sets of books” suggests outright corruption and false measurement.

    According to the CalPERS actuary, CalPERS’ official estimated rate of earnings, 7.5%/annum, is too high. But that view has nothing to do with the rate charged to terminating plans. The actuary simply believes that, looking ahead, earnings will be lower over the long term than 7.5%/annum.

    CalPERS should be using a reasonable rate, as recommended by its actuary. If it isn’t, that failure is a sign of bad governance. (And there have been problems over the years with bad governance at CalPERS, including outright corruption.) But none of this concern is related to the low termination discount rate.

    CalPERS is actually a set of plans, not a single plan. If a local jurisdiction has a pension plan for its employees, CalPERS separately calculates its liabilities. Liabilities will vary from jurisdiction to jurisdiction depending on employee demographics and behavior. If a local government decides to terminate its plan, the plan still has a liability to covered incumbent workers and retirees. Termination shifts the plan from one with an indefinite duration to one with a finite end. Someday, the last participant will die and the plan will truly end. CalPERS must ensure that it has enough money in that plan to pay off that last participant. By law, it cannot take money from other jurisdictions’ plans and subsidize any remaining shortfall in the terminating plan.[3]

    Given the shift from an indefinite duration to a finite duration, and given the bar against moving money from one plan to another, CalPERS must take a reduced risk approach to the terminating plan. It must invest in low risk assets. So, of course, it uses a low discount rate because the plan itself will earn a low rate on its low-risk assets. There is nothing nefarious about using a lower rate for terminating plans; it is just prudent pension management.[4]

    In short, CalPERS does have management problems. It should be using realistic estimates of future earnings. But the fact that it discounts terminating plans using a low interest rate is entirely appropriate.




    [3] California governor Jerry Brown recently vetoed an ad hoc bill that would have allowed moving money from one plan to another because it would violate the longstanding principle that each plan is a separate entity. See

    [4] The California Legislative Analyst’s Office recognizes that prudence requires low-risk (and thus low-return) assets for plans that are terminating. It provided an analysis of a proposed ballot initiative (that never made it to the ballot) that would have phased out defined-benefit pensions and noted that “there are costs related to the closure of defined benefit plans. As these plans ‘wind down’ over the decades, their pension boards likely would change investments to those with lower risk and lower expected returns. This would result in higher costs for these closed plans.” Source:  

  • 17 Sep 2016 9:14 AM | Daniel Mitchell (Administrator)

    Mitchell’s Musings 9-19-2016: Measurement for What?

    Daniel J.B. Mitchell

    I am going to take up defined-benefit pensions in this musing, but let’s start with a simple example. Suppose I wish to have $1,000 thirty years from now. How much should I put aside today as a lump sum to have that target amount in thirty years?

    Clearly, the answer has to depend on what I think I will earn over the thirty-year period with my lump sum. But it is hard to be sure what the earnings rate will be unless I invest in something that is relatively riskless and that has that thirty-year duration.  Let’s suppose that there is such an asset in the marketplace and it carries an annual yield of 4%. As it turns out, if I put aside about $308 today and invest it in that asset, I will have my $1,000 in thirty years.[1]

    However, suppose I were to invest in a reasonably prudent mix of stocks and bonds which are not as secure as the asset yielding 4%/annum, but which I believe – based on advice of experts – can be expected to yield 6%/annum. “Expected” is not the same as a sure thing, but the experts, looking at past long-term history and what they can see looking ahead, think 6% is a reasonable expectation. Of course, the experts note as a proviso that they could be wrong and that the actual result might turn out to be more or less than 6%.

    As it turns out, if I were willing to take the risk that the 6%/annum target might not be achieved, I could put aside only $174 today to get my hoped-for $1,000 in thirty years.[2] That is, if things work out as expected, my $174 will grow with compound interest at 6% and become the target $1,000. And had I instead put aside $308, I would find myself thirty years from now with an “extra” $771.

    Suppose further that in the period before I made the decision on how much to put aside today, expert advisors had been telling me that one could expect 7%/annum on average over a thirty-year period. But at the moment of decision, they told me that in view of recent adverse developments, they now believed 6% was more realistic. What would happen if I chose to believe that my advisors were being overly pessimistic due to recent developments (panicking) and that it would be better to keep the 7%/annum assumption? In that case, I would put aside only $131 today.[3] If keeping to 7% turned out to be correct, I would have my $1,000 in thirty years. But if my advisors were correct with their downward revision and the actual return was 6%, I would be short by about $245.

    The most obvious point of this tale is that the more I put away today, the more likely it is that I will have at least $1,000 in thirty years. Put another way, if I follow what a low long-term rate of return implies in deciding how much to put away, I increase the odds of at least achieving my target. Note that there are really two steps implied. First, I assume a low rate. Second, because I assumed a low rate, I decide to put more money away today. These are separable events.

    Suppose we translate these numerical examples into pension terms and suppose that actual rate of return turned out to be exactly the advisor-forecast of 6%/annum. The liability of my plan is $1,000 thirty years from now. If I had discounted that liability by 6% and had – as a result – put away $174 today, the plan would turn out to be fully funded. (My current ratio of assets to discounted liabilities = 1 or 100%.) If I had put away $308, the plan would be overfunded by 77%. ($308/$174 = 1.77) If I had put away only $131, my funding ratio would be only 75%. ($131/$174 = .75) I would be underfunded by 25%.
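    The lump-sum and funding-ratio arithmetic above can be checked directly. The sketch below reproduces the figures in the text (amounts rounded as in the footnotes):

```python
def lump_sum(target, rate, years):
    """Amount to set aside today so it compounds to `target` in `years`."""
    return target / (1 + rate) ** years

target, years = 1000.0, 30
at_4 = lump_sum(target, 0.04, years)  # the relatively riskless asset
at_6 = lump_sum(target, 0.06, years)  # the expected stock/bond mix
at_7 = lump_sum(target, 0.07, years)  # the stale, optimistic assumption

print(f"set aside at 4%: ${at_4:.2f}")  # ~$308.32
print(f"set aside at 6%: ${at_6:.2f}")  # ~$174.11
print(f"set aside at 7%: ${at_7:.2f}")  # ~$131.37

# Funding ratios if 6% turns out to be the true long-run return:
print(f"funding ratio with $308 set aside: {at_4 / at_6:.2f}")  # ~1.77
print(f"funding ratio with $131 set aside: {at_7 / at_6:.2f}")  # ~0.75
```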

    I have gone through this arithmetic because the University of California (UC) defined-benefit pension is officially underfunded and the UC Regents – as plan trustees – periodically mull over what to do about it. The plan uses a methodology which estimates the discounted value of its liabilities to future retirees by using the same discount rate as the rate officially estimated by the UC Regents to be their long-term expected annual rate of return.  Currently, the officially expected rate of return is 7.25%/annum.[4] However, it is clear from public statements of the Regents’ chief investment officer and his staff that they (the CIO and his staff) believe a 6-ish long-term rate is more realistic going forward than a 7-ish rate.

    Moreover, there is at least one outside advisor on the Regents’ Committee on Investments who is arguing that even if the 6-ish rate is a reasonable expectation of the long-term return, the discount rate that should be applied to the liability of the plan is a 4-ish number.[5] That 4-ish number, in his view, seems to equate to what the State of California pays on long-term general obligation bonds. (The state in fact pays less than that because its bonds are tax-exempt, but if you adjust the yields to the equivalent taxable rate, they would have a 4-ish return.) His argument is that if the state’s pension liabilities are as firm as its bond liabilities, the same rate would be used for both.

    There are two different issues here. The first is whether the Regents should lower their official expectation of a return to the level their own expert is telling them is appropriate (i.e., from a 7-ish expectation to a 6-ish rate). Presumably they should, unless they truly believe he is being panicked by recent developments and that the old official 7-ish assumptions are still valid. But if they instead believe what their expert is saying to them now, that belief will tell them that the plan is more underfunded than current methodology indicates. So the corollary is that if they lower the official expected return, they should also up the contributions to the plan appropriately. Again, as in our earlier example, there is a two-step process here. First, a change in the assumption and, second, acting on that assumption.

    If they don’t take the second step, the plan would gradually have a lower and lower asset balance relative to liabilities and would someday become a pay-as-you-go arrangement. That is, each cohort of employees would – at some date in the future - end up being “taxed” to pay the pensions of previous cohorts. As a practical matter, without the availability of earnings from a large investment pool of assets, the pay-as-you-go cost would be very high.

    Another issue, however, is whether it makes sense to have a lower discount rate than the expected earnings rate, as the outside expert is arguing. Lowering the discount rate for liabilities will up the measured value of those liabilities and thus lower the measured value of the funding ratio. If more contributions result, the likelihood of at least having the assets needed to pay those liabilities goes up. But note that measured value is not the same as actual value. The actual value will depend on actual future earnings. If you believe those earnings to be 6-ish, and yet you fund on the basis of 4-ish, your plan assets will gradually rise relative to liabilities and will do so indefinitely. Presumably, at some level of overfunding, the contributions would be halted.[6]
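    To see why funding against a 4-ish discount rate while actually earning 6-ish builds an ever-growing cushion, run the earlier single-payment example with mismatched rates (a stylized illustration, not the UC plan's actual cash flows):

```python
# Contribute as if the fund will earn only 4%, but actually earn 6%.
# Stylized single-payment example; not the UC plan's actual cash flows.
contribution = 1000.0 / 1.04 ** 30            # ~$308.32 set aside today
assets_at_maturity = contribution * 1.06 ** 30

print(f"assets after 30 years: ${assets_at_maturity:,.0f}")  # ~$1,771
print(f"surplus over the $1,000 liability: ${assets_at_maturity - 1000:,.0f}")
```

The surplus here is the "extra" $771 from the earlier example; in an ongoing plan with contributions every year, that overshoot accumulates indefinitely, which is why contributions would presumably be halted at some level of overfunding.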

    A key point about a pension plan – which differentiates it from the $1,000-in-thirty-years example – is that a pension plan goes on indefinitely; there is no finite maturity. On a regular basis, estimates of liabilities and expected future rates of return should be adjusted iteratively. There really isn’t a behavioral reason to pick a number such as 4% for discounting on the grounds that pension liabilities are in theory similar to state bond liabilities. The 4% vs. 6% differential might be taken to be an indication of some measure of risk to employees. That is, other things equal, employees might be willing to contribute more to a plan which guaranteed an outcome rather than one that produced only an expected outcome. (But there is much research to suggest that ordinary folks have big problems in evaluating risk.)

    Just blowing up the measured liability relative to assets by assuming a lower discount rate than the assumed earnings rate might well have perverse results. It could make the current underfunding problem seem intractable and lead to Regental paralysis and no solution. The purpose of computing the funding ratio is managerial, not theoretical. You make the calculation to help guide management decisions.

    The best outcome, and the one most feasible, would be for the Regents a) to lower their expected earnings rate (and their liability discount rate) to what the CIO and his staff are suggesting is reasonable, and b) to modify their funding plan to accord with that rate, used as both the projected earnings rate and the discount rate. And they should follow that approach periodically and make iterative adjustments as needed.


    [1] $308.32 x (1.04)^30 = $1,000

    [2] $174.11 x (1.06)^30 = $1,000

    [3] $131.37 x (1.07)^30 = $1,000

    [4] The rate was recently lowered from 7.50%.

    [5] You can hear the September 9, 2016 meeting of the committee at The comments of the outside advisor are at approximately 1:29 on the audio recording.  

    [6] We know in fact that when the plan became overfunded in the 1980s as officially measured, contributions were halted for two decades. So there is evidence that actual contribution behavior responds to the measured value of the funding ratio. We know in fact that – with a long lag – when the official measure of the ratio came to show underfunding, contributions were resumed. There may well be asymmetry between the reaction to overfunding and the reaction to underfunding. And, as noted above, a high measured level of underfunding could cause paralysis.

  • 12 Sep 2016 8:23 AM | Daniel Mitchell (Administrator)

    Mitchell’s Musings 9-12-16: Chicago

    Daniel J.B. Mitchell

    We are used to the idea that movies are rated for “mature” content (PG-13, R, etc.). TV shows also are similarly rated. Sometimes radio and TV programs are prefaced with a statement that there is language some might find offensive. So is there any real difference between these types of warning systems and the “trigger warnings” for college courses that have been in the news in recent years?

    The use of such warnings in course syllabi – in contrast to movie, TV, and radio warnings – has produced (triggered?) substantial controversy. An engineering professor at Auburn University recently got himself fifteen minutes of fame by putting the statement "TRIGGER WARNING: physics, trigonometry, sine, cosine, tangent, vector, force, work, energy, stress, quiz, grade” at the top of his syllabus as a parody.[1] Much more attention was paid – because it wasn’t intended as a joke – to an orientation statement from a dean at the University of Chicago: "Our commitment to academic freedom means that we do not support so-called trigger warnings, we do not cancel invited speakers because their topics might prove controversial and we do not condone the creation of intellectual safe spaces where individuals can retreat from ideas and perspectives at odds with their own."[2]

    Obviously, despite the noncontroversial precedent of movie ratings, the trigger movement is being taken by some as a serious matter, although how widespread the concern is among practicing academics can be questioned.[3] In particular, the Chicago statement received substantial attention and applause – more from the political right than elsewhere.[4] LA Times columnist Meghan Daum – who is not a right-wing observer – argued that the use of triggers, by itself not a big deal in her view, has become mixed up with “the much larger phenomenon of leftist groupthink masquerading as liberalism.”[5] That is, in her view a practice that is relatively harmless has come to stand for something else.

    Note that the Chicago statement is self-contradictory. In order to protect free speech, it seems to support a ban on the use of a type of statement (speech) by an instructor on a syllabus. You can argue whether a policy of “not supporting” use of trigger warnings is an absolute ban on them. But given that statement, if you were a junior (untenured) faculty member at Chicago, might you not be reluctant to use a trigger? Would you want to be doing something which the university doesn’t “support”? The statement, after all, is being made by a campus authority figure. If the dean were just expressing his personal opinion, he could have said so, and then further clarified that he was not articulating an official policy. But that wasn’t what he did.

    There is a difference between someone deciding not to go to an R-rated movie to avoid the offensive content and a student responding to a trigger warning on a syllabus. The former situation represents a choice about an entertainment. The latter might be interpreted by the student as meaning you don’t have to read material that might offend you and yet you would nonetheless receive credit for the course. Or it might be taken to mean that you don’t have to read what offends you and that the instructor is bound to supply you with alternative readings that you prefer.

    Just putting a trigger warning on a syllabus without further comment is potentially confusing. If the potentially offensive readings are nonetheless required, the syllabus should so indicate. You don’t have to go to an R-rated movie and there really is no consequence if you don’t. But if you take a course, you do have to do the assigned work. And some courses, moreover, are required for completing a major, or even to graduate. A syllabus trigger warning has an element of importance that a movie rating does not. So it needs to be explained.

    I am unaware of any university requiring the use of trigger warnings. And it is not clear how you would mandate their use without also defining what kind of content is offensive. Is it any content with violence? Any content with sex? Would the warnings cover reference to wars – inherently violent - in history courses? Could students complain about a lack of a warning on any topic of their choosing in some university tribunal? That kind of complaint mechanism could lead to de facto censorship by anyone who didn’t like the way a topic was discussed.

    So we might add a proviso to our view that the Chicago dean should have made it clear that he was merely expressing a personal opinion and not a university mandate (if he was just expressing a personal viewpoint). Sometimes, norms can become quasi-mandates without any official policy change. What started out as a fad of adding trigger warnings could become an expectation of more. Harvard law professor Jeannie Suk Gersen noted in a New Yorker article that the discussion in criminal law classes has been impeded by student objections to including as a topic the law of sexual assault.[6] That is, the demand for trigger warnings – which are not mandated by Harvard – has morphed into a demand to omit a section of the curriculum that lawyers are supposed to know something about.

    Perhaps, therefore, the Chicago dean might better have said that while the use of trigger warnings was a matter of instructor discretion, their use did not mean that potentially offensive topics would be avoided or that dealing with such topics is at the option of those enrolled. My guess is that is what he meant to say.

    As far as articulating official policy, he might further have focused on what the University of Chicago is not doing. Chicago is not creating the “bias-response teams” which have embarrassed other universities by censoring courses and speech in response to someone’s complaints of offense. You don’t have to be an expert in organizational behavior to know that such teams, once they are created, will seek out work to do to justify their existence. At least two universities that created such teams have had to disband them after the teams started to do what they shouldn’t do or seemed poised to do so.[7]

    Daum is correct; trigger warnings by themselves are relatively harmless. But if they are used, their implications should be explained to students. And going beyond such warnings in the direction of institutionalized thought police should be avoided. There will always be a few cases of academics who misbehave in some extreme manner – teaching wacko conspiracy theories or whatever. But they can be dealt with in an ad hoc manner. And, if you are wondering about yours truly, I don’t have anything labeled “trigger warnings” on my current syllabus, but I have long provided a very brief description of what each assigned item is about. And it is clear that all the items listed as assignments are required.




    [3] NPR reported on a survey it conducted – which it qualified as nonscientific – in which half of college instructors said they used trigger warnings. That result seems implausible since there are many fields – math, the sciences, computer science, etc. – where, other than in the parody of footnote 1, it’s hard to see how they would be used. See (Would there be a warning for religious fundamentalists that there is scientific evidence of evolution or that there is evidence that the Earth is more than 6,000 years old?)

    [4] However, the Chicago approach was supported by a former Obama administration official:




  • 03 Sep 2016 1:54 PM | Daniel Mitchell (Administrator)

    Mitchell’s Musings 9-5-16: Labor Day Thoughts

    Daniel J.B. Mitchell

    Labor Day – which happens to fall on the date of this musing – traditionally features news articles that rehearse the decline of organized labor and then add some pundit observations as to whether unions will ever “come back.” Possibly, because this is a presidential election year – and because of the Trump candidacy – there will also be some observations about whether international trade and/or immigration is good or bad for labor. “Globalization” will likely be cited.

    There is no doubt that economic forces have a role in providing an understanding of the Trump phenomenon. The standard view seems to be that underlying it all is that there is a segment of the workforce – mainly white males with less than a college degree –  that has been disadvantaged by the decline of good jobs in manufacturing and that the political elite has not responded. So, in this view, Trump supporters have turned to an outsider candidate who promises to do something about it – block illegal immigration, negotiate advantageous trade deals, or whatever.

    At that point, the analysis tends to run two ways. One is that globalization - a loose concept which seems to include both trade and immigration – is an inevitable trend so the Trump supporters have picked an anachronistic cause and a leader who is misleading them. Another is that what’s bugging the Trump supporters is really the loss of good manufacturing jobs to technology – also an inevitable trend – so Trump is deluding his followers by promising to do what he can’t do.[1]

    A variant is that there are things that could be done (and maybe should be done), but the political elites of both parties refuse to do them out of ignorance and/or self-interest. Trump is promising to do something, but actually – if he were elected – he wouldn’t. So, again, his followers are deluded. And the basic cause is economics.

    Still another variant is that Trump has mixed up racist messages with his economic message so that what he says on economic issues, assuming he loses in November as current polls suggest, will be incorrectly discredited. Within this approach, it is possible to pick and choose between immigration and trade as the valid issue which Trump’s defeat will kill.[2] (Of course, current polls could be wrong or what they show could change between now and November.)

    But as noted, there is an assumption throughout most of this type of prognostication that the base of the Trump phenomenon is economics and jobs and that the racial and other “social” messages are a kind of gravy on top of the campaign, albeit unfortunate gravy.

    Now nobody with an economics PhD (such as yours truly) is going to argue that economics is not a factor in the Trump phenomenon. But other issues, including the “guns and God” issues that then-candidate Barack Obama made famous (or infamous), may not be just a product of economic concerns. If you look at Big Issues in American history, “social” issues stand out. Yes, before the Civil War, there were tariff disputes (economics) between the North and South. (The North favored high tariffs; the South, low.) But would there really have been a Civil War over tariff levels? Slavery was an economic issue for the South. But for the North, it was a social issue, a moral issue, even a religious issue. The “truth” that John Brown’s body lies a-moldering in the grave for is not lower tariffs.

    Or consider Prohibition. Amending the U.S. constitution is a difficult process and is rarely done. But the issue of Prohibition sparked two amendments, one putting Prohibition into place and the other repealing it. Prohibition was predominantly a social (and religious) issue. It would be hard to argue that the anti-liquor forces had an economic interest in implementing Prohibition. And there was more to Prohibition’s repeal than job creation in breweries.

    Those on the left want to see a linear, economics-based story – and some on the right who are anti-Trump seem to share that desire. Economics (loss of good jobs, etc.) leads to discontent which gets expressed in dysfunctional ways that really run against the interests of those doing the expressing. Trump supporters are deluded in thinking he will fix their jobs problem because a) it can’t be fixed due to inevitable globalization and/or technology or b) if elected, he won’t do the things that would be a fix. Perhaps we should be empathetic to the plight of Trump supporters, in the view of some liberal observers.[3] But empathetic or not, the real story is economics leads to “guns and God” (and racism).

    There is a more nuanced view, however. Perhaps for some – even many – Trump supporters, the key appeal is “guns and God” and economics/jobs is the side show. The Pew poll in the accompanying graphic shows significant overlap in Trump support in all groups except blacks.[4] Trump supporters are not just angry older white males without college degrees who have been displaced from manufacturing. Trump gets support from about a third of women, about a third of the young, and about a third of the college-educated.

    The fact is that even on Labor Day, jobs are not the whole story.



    [2] For an example where immigration is taken as the valid issue, see



  • 27 Aug 2016 6:06 PM | Daniel Mitchell (Administrator)

    Mitchell’s Musings 8-29-16: Not Persuasive

    Daniel J.B. Mitchell

    When LERA, the Labor and Employment Relations Association, changed its name from the Industrial Relations Research Association (IRRA), it was attempting to be more modern in its terminology. Perhaps “industrial” suggested only blue-collar sectors such as manufacturing. Moreover, the terms “labor relations” and “industrial relations” tended in the past to mean only union-management relations and not relations in the much larger nonunion sector.

    Still, in looking for a new name, the Association did not adopt the contemporary “human resources” terminology. Human resources as a phrase – whatever it might mean to those folks who have those words in their job titles (e.g., VP of Human Resources) – doesn’t suggest a relationship. Steel is a resource. Money is a resource. But you don’t have a relationship with either of those “resources.” In particular, there is no need to be persuasive with regard to steel or money; you use them as you see fit without worrying about how they might feel or react.

    It is true that the IRRA, now LERA, developed at a time (the late 1940s) when unions were in a period of ascendancy. And it is also true that LERA remains linked to the world of collective bargaining in its structure and interests. However, the idea of a relationship that needed to be studied was always strong. The union sector has in fact two levels of relationships. There is the standard employer/employee relationship with the usual hierarchy of boss/subordinate. But that interpersonal relationship is mediated by the employer-union relationship. Much of the work of the Association was (is) about the complexity of the dual relationships and their interactions.

    When the general public thinks about unions, it is often in the context of conflict (strikes). But a good deal of the research of the Association had to do (has to do) with avoiding or reducing conflict. The study of arbitration and mediation, or just case study comparisons between amicable and hostile relationships, are all examples of such research.

    Of course, there are situations in which people can simply be ordered to do things. As in “The Charge of the Light Brigade”: …Theirs not to reason why; theirs but to do and die. But, then again, the Charge of the Light Brigade did not work out so well. Maybe if someone had asked why – if the members of the light brigade had been asked their thoughts on the likelihood of success – things might have been different. In many cases, and certainly in labor and employment relations, some element of consultation and persuasion can produce better results. Providing some legitimacy or rationale through obtaining a “buy in” of participants can reduce conflict.

    Even when conflict arises and third parties are brought in to help, persuasion is an element in the ultimate resolution. Mediators may suggest alternatives or reinterpret positions and outcomes. Arbitrators – even though they ultimately make the decision – try to provide persuasive arguments as to how they arrived at those decisions. Arbitration decisions typically are accompanied by explanations. In those written decisions, they will review what the parties presented and explain their responses. That is, arbitrators will demonstrate that they listened, even if they did not agree with the interpretation being proffered.


    Of course, there remain those conflicts that are settled by infliction of economic pain via strikes and lockouts. Persuasion in those cases – as we have been using that word - is less of an issue. But typically, use of such tactics is seen as a last resort if no other means of resolution seems possible.

    What brought all this to mind was an observation that there is a tendency nowadays to ignore what is persuasive in situations in which, in the end, there is no option to order someone to do something; no option to inflict enough economic pain to make anyone agree.  As an example, I happened to peruse the opinion section of the website for the UCLA undergraduate student newspaper recently and came across the listing below of four opinion pieces:

    Let’s put aside the merits and demerits of the topics of the items. The first one tells somebody what they MUST do or they MUST think. The next two tell somebody what they SHOULD do. And the fourth just expresses an opinion about the subject. Let’s think about those lead-ins.

    The use of MUST would surely be a turn-off to anyone who might actually have some authority to – in this case - change transportation policy in Beverly Hills. So the author either doesn’t actually care if policy is changed or not, or doesn’t realize that starting with MUST is likely to be offensive. In fact, nobody MUST do anything or think anything. Who is the author to say otherwise?

    SHOULD in the middle two items is a bit softer than MUST. But it’s still pretty directive and a potential turn-off. Before the reader even gets into the argument to be presented, he/she is told what he/she SHOULD think. Why is the reader being told in advance of any rationale what he/she as a voter or a college student SHOULD do?

    The fourth item simply gives an opinion and invites you, as the reader, to find out what it is that the author believes. It doesn’t tell you what you should or must do. Of course, I have no way of knowing whether the authors of any of these pieces thought about being persuasive or about what form of presentation might be most persuasive. My experience, however, in teaching undergraduates is that they haven’t had much experience in persuasive policy writing.

    But it isn’t just youthful undergraduates nowadays who seem intent on expressing opinions without regard to persuasiveness. My high school class (of 1960) maintains a website whereby members of the class can post whatever they like. It was intended for recollections of the school, updates on careers, personal items, etc. At present, however, it is dominated by a few individuals who post – sometimes in vituperative terms – their opinions about the current presidential election. There are pro-Trump and pro-Clinton views and views that both candidates are terrible. And there are endless back-and-forths refuting each other, sometimes with lengthy essays that others in the class have no interest in reading. Although there have been suggestions by others who look at the website (including suggestions by yours truly) to the offenders that enough is enough, their “debate” continues. And it continues without apparent interest in putting forward positions in ways that will engage rather than offend.

    So expressions of opinion without apparent thought of persuasiveness seem to be found at both ends of the age spectrum, undergrads and folks in their seventies. Sadly, the practice is not found only at the extremes of the age distribution. You have only to look at the comment sections on newspaper websites to find similar results – presumably mainly reflecting ages somewhere in between current undergrads and the high school class of 1960. Opinions in the comment sections are also commonly expressed without any apparent interest in persuading readers of their validity. Even the basics of spelling and grammar are absent, despite the ready availability of automatic spellcheckers on computers.

    As with the student examples, I have no way of knowing whether the authors of newspaper comments know about, or care about, persuasive presentation. LERA members – by virtue of their field of study – presumably do care since persuasion is an inherent element in labor and employment relations. Perhaps it’s time at some future LERA meeting for a session on the art of persuasion and on why it seems to be becoming a lost art.

  • 20 Aug 2016 12:18 PM | Daniel Mitchell (Administrator)

    Mitchell’s Musings 8-22-16: A Clue from the Wages Fund

    Daniel J.B. Mitchell

    An important element in the current presidential campaign is wage stagnation and the potential role of trade competition in retarding wage growth. So an interesting question is how much higher real wages might be if policy on trade – or whatever else you think is depressing wages – were changed. The question is a bit like one about manufacturing jobs that we reviewed in prior musings. In that case, we asked how much larger that sector could be if the trade deficit were eliminated.

    The answer we found in the manufacturing case might be described as “somewhat,” but not a dramatic return to the kind of manufacturing share of jobs that existed in, say, the 1950s. Note that the answer being only “somewhat” (and not “huge”) is not a reason to do nothing. Indeed, I have urged that there should be a policy in place to get to balanced international trade. But what the “somewhat” answer means is that there are limits to what the effect of a policy shift might be.

    What about real wages? It might not surprise you that a back-of-the-envelope calculation also suggests a “somewhat” type answer. But that the “somewhat” in the wage case comes from a variant of the “wages fund” doctrine of the 19th century (and even earlier) might be a surprise. The old wages fund doctrine relates to a supposed constancy of labor’s “share” – the dollars going to labor in the form of wages and benefits – as a proportion of national income.

    Despite the wages fund doctrine, there really isn’t a theoretical reason why the share of labor has to be constant. And there is some cyclical variation in the share. Empirically, the share doesn’t literally stay constant, even adjusting for the business cycle. But it does change slowly over time. The table below shows the share and the ratio of employment to population, both in percentage terms.[1] Years shown in the table are roughly business cycle peaks. (The year 2015 – the last full year available – was not a peak; we, of course, don’t know when the next peak will occur.)

    Year    Labor’s Share of    Employment-to-
            National Income     Population Ratio

    1949         60.2%               55.4%
    1959         62.3                56.0
    1969         65.1                58.0
    1979         65.9                59.9
    1990         66.4                62.8
    2000         65.8                64.4
    2007         64.1                63.0
    2015         61.9                59.3


    Generally, labor’s share rose until 2000 and declined thereafter. Although there is not a simple, mechanical link evident in the data, the working population (represented by the employment-to-population ratio) also rose until 2000 and then fell. Possibly, there is some connection between those two trends. Perhaps growth in the labor force was a factor in enlarging the share.

    If you think that, say, international trade was squeezing the share of labor compensation, and if you think you have a policy that could offset that squeeze effect, how much would wages rise with an implementation of your policy?[2] A simple answer might be developed from the observation that at its peak, labor’s relative share was about 66% of national income, and now (2015) it’s about 62%. If you were to raise the share back up to 66% - that is, push it up by 4 percentage points – and if the number of workers receiving the share remained unchanged, dollars per worker would rise by something like 4/62 or 6.5%.[3] In magnitude, that’s a “somewhat” answer.

    If you think the proportion of the population working has something to do with the size of the share, you might note that the last time the employment-to-population ratio was in its current range of around 59+ percent was the 1970s. And back then, labor’s share was about 66%. So, even standardizing for employment, you’ll still get 4/62 or 6.5%.
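    The back-of-the-envelope arithmetic above is easy to reproduce. Here is a minimal sketch in Python, using the rounded share figures from the table (roughly 66% at the peak, 62% in 2015); the variable names are illustrative, not from any official data source:

    ```python
    # Back-of-the-envelope estimate: if labor's share of national income were
    # pushed back up from its 2015 level to its earlier peak, with the number
    # of workers unchanged, compensation per worker rises in proportion to
    # the share itself.
    peak_share = 66.0     # labor's share at its peak (%), rounded
    current_share = 62.0  # labor's share in 2015 (%), rounded

    # A 4-percentage-point rise on a 62% base:
    gain = (peak_share - current_share) / current_share
    print(f"Implied rise in compensation per worker: {gain:.1%}")  # about 6.5%
    ```

    This is just the 4/62 calculation from the text – a “somewhat” answer, not a dramatic one.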

    Of course, even if the average wage/employee rose by something like 6-7%, that gain says nothing about how the dollars would be distributed, demographically or occupationally, within the workforce. That is, some groups might be benefited by an average wage increase more than others. Overall, however, the effect is modest. Still, no one would turn down a pay raise, even if it isn’t huge.



    [1] Labor’s share data are from the U.S. Bureau of Economic Analysis national income account Table 1.12. Employment data are from the U.S. Bureau of Labor Statistics.

    [2] There are other candidates than international trade for being the cause of wage stagnation such as the decline of private-sector unionization, slippage in the minimum wage, and/or “technology.” 

    [3] If you think the rise in wages would pull more folks into the workforce, i.e., that the labor supply curve has a positive slope, the gain in wages might be reduced a bit. Or if you think there is a backward-bending curve (with a negative slope), the gain might be a bit more. But for back-of-the-envelope purposes, what you assume about supply is not going to matter much.

  • 13 Aug 2016 2:30 PM | Daniel Mitchell (Administrator)

    Mitchell’s Musings 8-15-16: Not to the Swift

    Daniel J.B. Mitchell

    I returned, and saw under the sun, that the race is not to the swift, nor the battle to the strong, neither yet bread to the wise, nor yet riches to men of understanding, nor yet favour to men of skill; but time and chance happeneth to them all.

    Ecclesiastes 9:11

    A profound thought in that quote. It has something to do with the accidents of life and the uncertainty of outcomes and experiences. When I was a senior in high school – Stuyvesant High School in New York City – I got a job at the Swift Messenger Service – by accident. One of the other boys at the school told me about the job. Stuyvesant at the time was an all-male school requiring an entrance exam to get in and was located on 15th Street west of First Avenue.[1] The Swift Messenger Service was located just west of Fifth Avenue on 47th Street, which is (was) in the heart of the Diamond District.[2]

    To get the job, I simply accompanied the guy who told me about the job to the office, spoke to the owner/boss who hired me on the spot and told me to get the necessary New York State working papers, which I did. There was nothing that could be called an “interview.” I walked in on my own two feet which qualified me to be a messenger boy – since the job mainly involved walking around midtown Manhattan.

    There still are messenger services in Manhattan. But based on a Google search, only a few. (Uber seems to have one.) There were more such enterprises back then (1960) before the invention of the fax machine, email, etc. Many of the clients of the Swift Messenger Service were advertising agencies. The agencies had their own messengers on payroll, but there was a peak period in the late afternoon when apparently a lot of last minute material had to be picked up and delivered. The late afternoon corresponded to the time when high school kids were available since school let out at 3 pm.

    In some cases, there appeared to be an ongoing business relationship with particular agencies and there were dedicated phone lines by which they could call Swift. But a lot of the business came from sporadic callers. The sporadic callers had the idea that when they called for a messenger, he would pick up their package and deliver it directly to where they wanted it sent. “It” typically was a manila envelope. If they were in a hurry, they could pay extra for “rush” service.

    Actually, the Swift Messenger Service ran like a post office. Messengers were sent out on routes to pick up packages from addresses in particular areas. All of the packages were brought back to the office on 47th Street; they were not directly delivered. When a pile of packages accumulated in the central office, delivery routes to particular areas were made up and boys were sent out again. The only thing that rush service seemed to buy was a label on the package that said “rush.”

    Messengers would arrive from school around 3:30 pm and sit on a bench in the office. As orders came in, boys would be dispatched on routes. Messengers were paid the minimum wage, then $1 per hour. But the “clock” didn’t start when you came in. It started when you were dispatched. So if business was slow, you would be on the bench longer and weren’t paid for the waiting time.

    Before coming back to the office after completing pick-ups or deliveries, you were required to phone in from a pay phone (there was no shortage of pay phones back then). Calls cost a dime, for which you were reimbursed. But you were supposed to make sure you had a dime for the call with you. When you called in, you might be told to go to new pick-up addresses near where you were – requests that had come in during your travels.

    The office on 47th Street was staffed by the owner/boss plus a couple of other adult workers. Owner/boss sometimes took the day off and went to the races – or somewhere – leaving the others in charge. There were also one or two adult messengers who appeared to be “cognitively impaired” (if that is the correct term nowadays) and seemed to be used for special trips, such as overnight trips to Philadelphia.

    OK. You now have a basic outline of the enterprise and its practices. But note that it is a rich source of labor market anecdotes and issues, some of which I used to cite in my labor markets class.

    Let’s start with hiring. Note that there was no formal posting or advertising of jobs. It was all word of mouth. And incumbent workers just brought in friends when there were vacancies. In more recent years, this type of recruitment via network has been particularly identified with immigrant labor markets. But obviously it has existed for a long time. In my own case, I wasn’t particularly looking for an after-school job. The potential workforce is typically divided between 1) the employed, 2) those looking for work but without jobs (the unemployed), and 3) those out of the labor force (and not actively looking for work). Studies indicate that jumps from the third status to the first, i.e., recruitment of people into employment among folks who were not actively seeking a job, are common.

    Another point to note is that all the messengers were boys. Was that because the network was linked to an all-boys high school? Or was it that girls in 1960 would not have wanted to be messengers? Or that the Swift Messenger Service just didn’t hire females? (None of the adult workers were women.) Interesting questions. There were no laws at the time that would have prevented an employer from discriminating on the basis of sex. If you were to look at help-wanted ads in the newspapers of that era, you would find male and female jobs listed separately.[3]

    What about the lack of a real interview? Since the job really required very little that could be called “skill,” that lack wasn’t surprising. If you could read an address and walk to it, or on rare occasions take a bus or subway to it, you were “qualified.” So there was little risk to the employer of a bad hire. If you nonetheless turned out to be a bad hire, you could be fired. No big deal. And the fact that an incumbent worker – who was presumably OK or he wouldn’t have been incumbent – brought you was a kind of screening. Why would he want to bring someone who wouldn’t work out?

    The pay system also has some lessons. First, there is some question about the process of having workers sit on a bench unpaid until dispatched. It at least skirts the edge of illegality. A lawyer would probably want to examine the degree to which you were required to arrive by a certain time, even if no work was available. But note that if the practice was illegal, i.e., you should have been paid from the time of arrival, who was going to enforce the law? The amount that would have been due to any particular worker would have been trivial. And workers would have had to file complaints – assuming they knew about the law and where complaints had to be filed.

    Moreover, the Swift Messenger Service was a small business. So if the practice was illegal, there was at least a good chance the owner/boss may not have known that it was. There were no lawyers on staff. It is one thing to enact labor laws; it is another to enforce them.

    Second, you were paid on a time wage, $1 per hour. When I first got the job, the boy who recruited me took me aside and the following dialog ensued:

    Mitchell, do you know what the motto is of a Swift Messenger Boy?


    Don’t be Swift!

    What we have here is a classic principal/agent problem associated with paying by time. The faster you accomplished your task, the less you were paid for it. So there is an incentive not to be swift. One remedy used in some jobs for this problem is to pay by the task, not by the time. But in the case of messenger boys, the task varied from assignment to assignment. Moreover, one could form at least a rough idea of how long it would take to walk to the various addresses and then phone in. So even without detailed monitoring, while there was no incentive for messengers to be superfast, “too much” dawdling – a snail’s pace – would have been detected.

    The practice of reimbursing dimes for phone calls also exposed an agency problem. There was a drawer full of dimes used to reimburse the messengers. And there was what nowadays might be called a problem of “corporate culture.” The adult workers in the office hated the owner/boss. When he went off to the races or wherever, leaving them in charge, they would “reimburse” us for dimes we hadn’t spent. Giving away the owner’s money gave them pleasure. We didn’t complain. When you make $1 per hour, an extra dime is 10% of your hourly wage.

    And talking about being reimbursed for dimes we hadn’t spent, there was yet another perversity entailed in the practice. But it was a negative externality borne by the telephone company, not the Swift Messenger Service. Word got around among the messenger boys that you could make calls from payphones without a dime.

    What you needed was an uncurled paper clip. You would shove one end into one of the little holes on the microphone and touch the other end to the body of the phone. A contact would be made and you would get a dial tone. You could then call into the office as required and get reimbursed for the 10 cents that you hadn’t paid. Of course, that was 10 cents less for the phone company. Moreover, repeated jabs of the paper clip into a phone’s microphone damaged it, worsening the sound quality until finally it was unusable. The phone company was aware of the problem and was in the midst of replacing phones – or at least phone microphones – with ones where the trick wouldn’t work. But the messenger boys passed word around about where there were still phones in operation which were vulnerable.

    Apart from the labor market lessons embedded in this tale, there is also one more about misinformation in the marketplace that is more general. Remember the folks who believed that the messenger they summoned was taking their package directly to its destination (and not back to the central office)? Or those who paid extra for “rush” service that they didn’t get? The lesson is clear: Caveat emptor.


    [1] The school became coed in 1969 and is now located near the World Trade Center. (It was closed for an extended period after 9-11 during the clean-up.)  


    [3] I used to run a video in class illustrating attitudes toward working women in the 1950s:  

  • 04 Aug 2016 2:05 PM | Daniel Mitchell (Administrator)

    Mitchell’s Musings 8-8-16: It’s no longer 1944 or 1968 or 1971 or 1973

    Daniel J.B. Mitchell

    I happen to be vacationing as this musing is written at the Mount Washington Hotel in Bretton Woods, New Hampshire. That’s right; it’s the very place where the 1944 Bretton Woods monetary conference took place, the conference that created the International Monetary Fund (IMF) and the World Bank.[1] The World Bank was established to provide loans for the reconstruction of Europe after World War II and later for development loans to third world countries. Although World War II reconstruction was accomplished long ago, there are still third world countries. So arguably the World Bank still has a well-defined function. But it’s harder to make that statement about the IMF.

    The function of the IMF, as seen in 1944, was to oversee a new postwar international monetary system – the Bretton Woods system. Note that in 1944, World War II was still in progress in both the European and Pacific theaters. So the system being proposed was something that did not then exist. If there was ever an example of economic planning, the Bretton Woods system was it. Although many countries sent representatives to the Bretton Woods conference, the ultimate plan was largely a matter worked out between Britain (which had been the center of world finance historically) and the U.S. (which was to be the center after the War).

    Although the Bretton Woods system was new, it was built on elements of the old order – primarily fixed exchange rates and elements of the old gold standard. But it was infused with the idea that the postwar world should have institutions of international cooperation to avoid World War III. For example, the failed League of Nations was to be replaced by the United Nations. On the other hand, part of the reason that the League had failed was isolationism in the U.S. after World War I. So much was done to make the new institutions palatable to U.S. politics.

    It’s not an accident that the UN, the IMF, and the World Bank are all located within the borders of the U.S. And it’s not an accident that the two financial institutions were to be located in Washington, DC and not New York City. American politics of that era had an anti-banking tilt, so keeping the financial institutions in DC was a statement that they could be watched by the federal government and not be captured by Wall Street. Back in those days, the federal government enjoyed a higher degree of popular trust than today.

    The two key players in the creation of the Bretton Woods plan – John Maynard Keynes for Britain and Harry Dexter White, a U.S. Treasury official – did not see gold as a necessary component of a fixed exchange rate regime. (And it isn’t.) But because gold had historically played a major role in the international monetary system, it was continued as a kind of third wheel of the Bretton Woods system. Keynes wanted the IMF to be a de facto world central bank with the ability to create money. But American politics would not allow creation of a new financial institution with that much authority. So the IMF ended up in 1944 as a world savings and loan, in effect borrowing and lending funds, but not creating money. And there was a cumbersome arrangement of quotas (essentially deposits) assigned to each member country. Apart from their monetary aspect, the quotas were also votes in the new organization, and they were originally specified so as to guarantee the U.S. and Britain effective control.

    This musing is not the place for a detailed history of the Bretton Woods system in actual practice. But let’s just say it never quite worked as planned. The new fixed exchange rates that were set overvalued major currencies relative to the U.S. dollar, creating an initial “dollar shortage” after the War. By the late 1950s, exchange rate crises and adjustments led to the opposite condition: a dollar surplus. Much of U.S. economic policy in the 1960s revolved around the American “balance of payments problem.”[2] By 1968, the U.S. stopped trying to maintain the market price of gold at $35/ounce, the price set before the War. By 1971, after a series of currency crises involving the dollar, President Nixon essentially ended the Bretton Woods system.[3] A quickly modified fixed exchange rate arrangement – the Smithsonian system – lasted only until 1973.[4] Thereafter, the U.S. ended the Smithsonian regime and the world moved to flexible exchange rates.

    From 1973 onwards, fixed exchange rate arrangements were regional affairs, the euro-zone being the prime example. So the Bretton Woods plan the IMF was created to oversee ended in the early 1970s. Yet the IMF continues to exist. If you ever need an example illustrating that institutions are easier to create than to terminate, the IMF has to be it.

    The years 1971 and 1973 (whichever you prefer as the date of the end of the IMF’s function) came and went over four decades ago. The function planned for the IMF at Bretton Woods disappeared less than three decades after the 1944 conference. So one might say that the IMF has been around without a function longer than it existed with a function.

    Of course, many folks in international circles would argue with my unkind characterization and would point to the IMF as a forum (like the UN) for discussion and a forum for research and data gathering (also like the UN). But here’s the thing. In the current U.S. election campaign, the issue of exchange rate manipulation has come to the fore, both in the (now-failed) Bernie Sanders campaign and the current Donald Trump campaign (we don’t know the outcome of that). The prime example of regional international monetary cooperation – the euro-zone – has been beset with problems. And when the IMF has gotten involved with local currency matters, it has become identified with “austerity” policies. Whatever you might say about austerity prescriptions, no one in 1944 had such a role for the IMF in mind.

    Given the developments in world monetary affairs since the early 1970s, and given the fact that the IMF seems destined to continue as an organization indefinitely despite its loss of original function, wouldn’t a new Bretton Woods-style conference – not necessarily in New Hampshire – be in order? From 1944 until the early 1970s, there were lessons. We learned that rigid fixed exchange rates present major problems of maintenance and are difficult to sustain. And we learned that trying to tie such a system to gold just adds to the inherent problems. From the early 1970s until the present, we learned that flexible exchange rates with no effective rules as to how they operate also present major problems. What seems to be called for is an agreement on a flexible system, but with agreed-upon rules, effective regulations, and a regulator to enforce them. In particular, the issue of currency manipulation – which is now a feature in U.S. presidential politics – needs to be addressed.





    [4] and  


Employment Policy Research Network (A member-driven project of the Labor and Employment Relations Association)

121 Labor and Employment Relations Bldg.

504 East Armory Ave.

Champaign, IL 61820


The EPRN began with generous grants from the Rockefeller, Russell Sage, and Ewing Marion Kauffman Foundations
