Mitchell's Musings

  • 12 Nov 2016 11:02 AM | Daniel Mitchell (Administrator)

    Note: Somehow, this item from Oct. 10, 2016 was either never posted or was inadvertently taken down. Here it is, out of sequence:

    Mitchell’s Musings 10-10-16: Makes Sense to Me

    Daniel J.B. Mitchell

    I came across an article in the Los Angeles Times business section recently with the print-version headline “Trade is seen as harmful.”[1] The article noted that a Pew Research Center poll found that “eight out of ten adults regarded outsourcing of jobs overseas and the growth of imports of foreign-made goods as harmful to U.S. workers. By comparison only half of the people surveyed saw automation as hurtful – even though many economists believe that new technologies and the mechanization of work have led to as many job losses as imbalanced trade.” The article went on to relate the trade concerns to the current presidential election campaign.

    Above are the actual poll results from the underlying Pew study.[2] Although the LA Times article seems to imply that respondents have misdiagnosed the relative importance of trade vs. automation, there may be an explanation. Unfortunately, Pew does not provide the precise wording of the questions asked. So exactly how respondents interpreted words such as “outsourcing” and “automation” is not clear. Were they given definitions? The hurt vs. help contrast also raises some questions. Do the words mean destroy jobs vs. create jobs? Or do the terms suggest making jobs more difficult to do vs. assisting workers to do their jobs? Or do the terms suggest some kind of good jobs vs. bad jobs distinction?[3] The LA Times article implicitly assumes the destruction/creation interpretation.

    Despite the ambiguities, let’s limit the interpretation to destruction vs. creation when thinking about the trade issue and how survey respondents reacted. Other things equal, the large trade imbalance (deficit) that has characterized the U.S. for decades has to be a source of net destruction. There are caveats, of course. Important among them is the fact that workers whose jobs are lost may end up in the non-trade sector (such as retail).[4] That is, imbalanced trade may shift the mix of jobs towards the non-trade sector without changing the total number of jobs.

    What about automation as a concern? Note that the LA Times article seems to use automation and technology interchangeably. If that is also how respondents reacted, at least some of them may have been thinking about the way technology provides an assist to workers in doing jobs. Thanks to computer technology, for example, it is easier to access information needed on the job than it used to be. Medical records are now available readily without going through paper files. There are many such examples.

    Finally, what about immigration? Respondents are roughly split on whether immigration hurts or helps American workers. The LA Times article notes that ten years ago, the result was much more anti-immigrant than it is now. Although the pace of illegal immigration is hard to measure precisely, another report from Pew suggests that during the past decade, net illegal immigration has halted.[5] That is, the absolute number of illegal immigrants residing within the U.S. has stayed about constant, as shown on the chart below. Presumably, the Great Recession and its aftermath had an impact in discouraging a net inflow; up until the Great Recession occurred, the number had been rising. That shift to a net of zero may explain the attitudinal change.[6]

    In short, the Pew survey results do not seem to be counter-intuitive. They seem to go with the election narrative. Reflected in the outcome, you have the followers of Bernie Sanders who think trade is a problem but not immigration vs. the followers of Donald Trump who think both are a problem.  You have automation-technology seen as good and bad, perhaps because the question is posed ambiguously. It makes sense to me.


    [1] The online version of the article had a different headline focused on immigration, not trade: “Americans are feeling better about immigrants' economic effect — but Republicans aren't, survey shows.” See  

    [2] The full report is at A summary is at  

    [3] The fact that decline of unions is seen as more hurtful than helpful in the survey suggests that respondents may have been thinking – at least in part – about job characteristics such as pay and benefits.

    [4] Exports and imports may differ in their labor-using characteristics. At least in theory, however, U.S. imports should be more labor-intensive than exports, which would intensify the net destruction effect. That is, even with balanced trade, there might be net displacement.

    [5] Even when the stock of illegal immigrants is constant, there may be gross flows into and out of the U.S. But the inflows and outflows must be balanced, i.e., a net of zero, for the total stock to stay the same.

    [6] Immigration has had a U-shape in terms of skill with a concentration of unskilled immigrants and a lesser concentration of highly-skilled immigrants. Since this pattern is not a replica of the existing U.S. workforce, there may be both competition among substitutes (e.g., low-skilled natives vs. low-skilled immigrants) as well as complements, e.g., natives for whom demand for their labor is enhanced by the presence of immigrants. For example, native supervisors in southern poultry packing plants may benefit from the influx of low-skilled production workers. It seems unlikely, however, that this nuanced view of the impact of immigration across different groups within the workforce is driving the survey results.

  • 11 Nov 2016 12:57 PM | Daniel Mitchell (Administrator)

    Mitchell’s Musings 11-14-16: Do We Really Have All Our Marbles?

    Daniel J.B. Mitchell

    When I was in high school a long time ago, a teacher wanted to demonstrate the laws of probability and the concept of sampling. So he filled a large jar with marbles of four colors. Since he had filled the jar himself, he knew the proportions of each color which he told the class. He then chose various students to come up and – without looking in the jar – randomly pick out a couple of marbles. The idea was to show that as the sample increased in size, it would come to reflect the actual proportions in the jar.

    Now this demonstration took place in Stuyvesant High School in New York City, a special school – you had to take an SAT-type test to get in – which at the time was male-only. And the teacher was a patsy. So the nerdy teenage boys in the class managed to sabotage the demonstration. I won’t go into more details on what happened, but let’s suppose that the demonstration had proceeded as the teacher had planned. Indeed, as the sample size grew larger, the proportions in the sample would have come to resemble the actual proportions in the entire jar.
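    The teacher's point can be sketched in a few lines of Python. The colors and proportions below are my own invention, not the ones from the classroom; the point is only that larger samples settle toward the jar's true proportions.

```python
import random

# Hypothetical jar: four colors in proportions I made up for illustration.
JAR_PROPORTIONS = {"red": 0.40, "blue": 0.30, "green": 0.20, "yellow": 0.10}

def sample_proportions(n, proportions, seed=0):
    """Draw n marbles (with replacement) and return the observed shares."""
    rng = random.Random(seed)
    colors = list(proportions)
    weights = [proportions[c] for c in colors]
    draws = rng.choices(colors, weights=weights, k=n)
    return {c: draws.count(c) / n for c in colors}

# As the sample grows, the worst-case gap between observed and true
# proportions shrinks.
for n in (10, 100, 10_000):
    obs = sample_proportions(n, JAR_PROPORTIONS)
    gap = max(abs(obs[c] - JAR_PROPORTIONS[c]) for c in JAR_PROPORTIONS)
    print(n, round(gap, 3))
```

    With a handful of draws the observed shares bounce around; by ten thousand draws they sit close to the truth, which is all the demonstration was supposed to show.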

    I thought of the marbles-in-the-jar story given the polling fiasco that occurred in the presidential election last week. Basically, pollsters (the polling industry) had a Dewey-Beats-Truman event in 2016, whatever rationalizations and defenses they offer. And there will be plenty of rationalizations offered in the weeks to come. Examples: Clinton won the popular vote so really those who projected a Clinton win were “right.” Voters changed their minds at the last minute. Etc., etc. You can just hear the self-justifying stories now.

    But there is a problem. Pollsters usually qualify their results by including a “margin of error.” Journalists take this margin to mean that there is zero probability of an error outside that range. But presumably “margin of error” actually has some relationship (what relationship exactly?) to the statistical concept of a confidence interval linked to sample size. A confidence interval, however it is defined, does not mean that there is zero probability of a result outside its boundaries.
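    For what it's worth, the textbook "margin of error" is the half-width of an approximate 95% confidence interval for a sampled proportion – which, by construction, leaves a 5% chance of a result outside its boundaries. A quick sketch (the poll size and the 50/50 split are illustrative):

```python
import math

def margin_of_error(p, n, z=1.96):
    """Half-width of an approximate 95% confidence interval for a
    proportion p estimated from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# A typical poll of ~1,000 respondents with a 50/50 split:
moe = margin_of_error(0.50, 1000)
print(f"+/- {moe:.1%}")  # roughly +/- 3 percentage points
```

    Note what the formula does and does not include: sample size is in there, but nothing about bad likely-voter filters, question framing, or response bias – the issues discussed below.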

    That issue is a comparatively minor part of the larger story. So let’s go back to the marble story. There is no such thing as a “likely marble.” If you take a marble out of the jar, its status is clear: it was a marble in the jar. But there is the concept of a “likely voter.” When you poll people, you have to decide what filters to apply in order to count people who will actually vote. Whatever you do about that issue, you may be wrong. The fact that you may be wrong potentially adds to the actual “margin of error,” but not to the reported one; you can’t treat your sample as if it were marbles in a jar.

    Marbles in a jar don’t change their colors. No matter how you ask the question about what proportions are in the jar, the marbles and their colors are unaltered. But we know from years of research that the way a question is framed when people – not marbles – are surveyed, can influence the answer you obtain. So pollsters (hopefully) try to take account of that issue. But the way in which you take account potentially increases the “margin of error.” You might be wrong.

    The same is true with regard to response bias. You don’t have to worry about marbles refusing to come out of the jar. But no one has to answer the phone. No one who does answer the phone has to volunteer to be questioned. And there may be systemic links between responding to a pollster and political inclinations. Like the problem of question framing, you can add methodological steps to try and account for response bias. But you might be wrong and thus potentially add to your “margin of error.”

    One way to try to deal with the issue of a wide margin of error in any particular poll is to average a group of polls. If you took several separate samples out of the marble jar, in effect you would have produced a larger sample and a smaller confidence interval as a result. But pollsters asking about voting behavior using different filters and different questions are not the same as repeat samples of marbles from the jar. Pollsters may influence each other. If you see your poll is an outlier, you may modify your methodology to make it more like the consensus. So the unintended biases in one poll may induce biases in another.

    Anyway, exactly how do you average different polls? Do you weight different polls differently? What weights? Even using a simple (unweighted) average is a de facto decision about weights. The bottom line here is that polling is not the same as pulling marbles from a jar; “margins of error” calculated as if it were the same are generally going to be too small.
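    Under the marbles-in-a-jar assumption, pooling polls really does shrink the sampling error; a shared bias, however, survives any amount of averaging. A small illustration (the 2-point common bias is hypothetical):

```python
import math

def moe(p, n, z=1.96):
    """Margin of error for a proportion p from a sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# With independent simple random samples, an unweighted average of four
# polls of 1,000 behaves like one poll of 4,000: the margin of error halves.
single = moe(0.50, 1000)
pooled = moe(0.50, 4000)
print(round(single, 4), round(pooled, 4))

# But if the polls share a common bias b (herding, shared likely-voter
# filters), averaging does nothing to remove it: no matter how many polls
# you pool, the error floor stays at |b|.
def rmse_of_average(k, n, p=0.5, bias=0.0):
    """Root-mean-square error of an average of k polls of size n."""
    sampling_var = p * (1 - p) / (k * n)
    return math.sqrt(sampling_var + bias**2)

print(round(rmse_of_average(4, 1000, bias=0.02), 4))    # bias dominates
print(round(rmse_of_average(100, 1000, bias=0.02), 4))  # still above 2 points
```

    That is the arithmetic behind the bottom line above: averaging fixes the marble part of the problem, not the people part.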

    What about using non-poll models based on economic variables? Note that the sample size of relevant elections is small; we have presidential elections only once every four years. And economic models that correctly predict the popular vote can be wrong about the Electoral College outcome, as in Bush vs. Gore in 2000 or Trump vs. Clinton in 2016. (If your economic model of the popular vote predicted Trump, you were wrong, even though you were right.)

    Generally, economic models tend to be estimated after some methodological massaging has occurred to find the best “fit” to past election data. Confidence in results has to go down as you keep revising your methodology to improve “fit.” Ultimately, a crooked enough line can fit any constellation of points but may not be good at predicting the next point in the series. Even if you buy the notion that “it’s the economy, stupid,” that view doesn’t tell you how to measure “the economy.” Real GDP? Unemployment? Trends vs. absolute values? There is nothing in the general idea that “the economy” is key to voter behavior that tells you exactly what the economy is.
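    The crooked-line point can be made concrete: a polynomial of high enough degree passes exactly through any set of past points, yet its out-of-sample "prediction" can be absurd. The election-year margins below are made up for illustration, not real data.

```python
# A degree-(n-1) polynomial passes exactly through any n points -- a
# perfect in-sample "fit" -- but that says nothing about the next point.
def lagrange_predict(xs, ys, x):
    """Evaluate the unique polynomial through the points (xs, ys) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

xs = [2000, 2004, 2008, 2012]
ys = [0.5, -2.4, 7.2, 3.9]   # hypothetical incumbent-party margins (%)

# In-sample: the crooked line reproduces every past point exactly.
assert all(abs(lagrange_predict(xs, ys, x) - y) < 1e-9 for x, y in zip(xs, ys))

# Out-of-sample: the 2016 "forecast" is a wild extrapolation (about -37.7),
# far outside anything the past data would justify.
print(round(lagrange_predict(xs, ys, 2016), 1))
```

    Four past elections, a perfect "fit," and a forecast no one would believe: that is why confidence has to go down as the methodology is massaged to fit the past.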

    If you are really an economic determinist, of course, you face a paradox. Why should the party which the model predicts will lose put up a candidate at all? Why go through the expense of a campaign? Presumably, the answer is that there is some probability (“margin of error”) that the model’s forecast is wrong. So there must be possible things the disadvantaged campaign can do to reverse the predicted result. And if there are such things, surely some of them are non-economic. Maybe the observation – promoted by Democrats – that Dewey looked like the man on a wedding cake had some effect!

    Bottom line: When Dewey didn’t beat Truman in 1948, folks remembered that lesson for a while. But people also forget over time. And it is always possible to assert that nowadays we have new sophisticated high-tech methodology that didn’t exist back in the 1940s. (“This time it’s different” is not an idea confined to financial bubbles.) In fact, new technology does not always guarantee better results. Think about people switching from landlines to cellphones, for example, and what that may do to sampling. Do we even know?

    Be skeptical.

  • 04 Nov 2016 10:32 AM | Daniel Mitchell (Administrator)

    Mitchell’s Musings 11-7-16: Disunion

    Daniel J.B. Mitchell

    Much has been said about the base of support for the Trump candidacy, usually depicted as disaffected white males with less than a college degree. We have noted in past musings that there are plenty of Trump supporters who don’t fit the stereotype. The latest Field Poll data for California – a decidedly “blue” state – show among “likely voters” that 28% of those with a college degree are Trump supporters as are 22% among those with postgraduate work (more than a college degree). Twenty-four percent of Latinos are supporting Trump, despite the notion that he alienated that group with anti-immigrant rhetoric. Among those 18-39 years old, he has 17% support, despite the idea that the young are inherently liberal. So the Trump story is not exclusively one of support among the stereotyped base by any means.[1] The base alone would not produce the kind of polling numbers Trump has been receiving.

    Nonetheless, it is worth looking at the stereotyped group. And particularly for LERA readers, it is worth asking whether the decline in unionization has something to do with the fact that this group is looking for someone to give it political representation. In a sense, you can view the declining-union idea as a “bowling alone” story.[2] Unions once served the group as a form of representation – not only at the workplace, but more generally in the American polity. Now, after a long period of union decline within the group’s primary employment sectors, unions represent only a small fraction of the group.

    Neither political party, it can be argued, has been energetic in finding actual economic remedies for the group. But Democrats have largely continued to appeal to the group indirectly via their historical connection to unions, unions that have less and less contact with the group. Republicans have made direct appeals – not with economic remedies, but with “social” issues. So what has occurred is the Obama “guns and God” story combined with bowling alone.[3] It’s true that in the contemporary era, folks can represent themselves and interact via internet social networks. But individual self-expression is not effective group representation.

    Unions never represented a majority of the U.S. workforce or even close to it. But in, say, the 1950s, something like a third of nonfarm workers were union members.[4] The representation rate was uneven, more in some regions and industries than others. But if you were a blue collar, white male, there was a good chance you were a union member.

    We don’t have detailed demographics from the 1950s “golden age” of unionization. But the U.S. Bureau of Labor Statistics (BLS) did do a study, based on Current Population Survey data, for 1970 (so a decade or more into the decline).[5] Nowadays, the stereotypical union worker is in the public sector, even though there are still more private than public union members.[6] (In 2015, there were 7.2 million public sector union members vs. 7.6 million private members.)

    In 1970, 1.1 million union members were in “public administration” out of 17.2 million members in total.[7] (Note that the absolute number of union members was larger in 1970 than today despite substantial growth since 1970 in the workforce.) Moreover, in 1970 (with private unionization rates already in decline but public rates increasing), the average union membership rate in each of the two sectors was about the same. Nowadays, the overall union membership rate in the public sector is 35.2% versus a mere 6.7% in the private sector.

    Male Union Members as Percent of Male Wage and Salary Workers: 1970

                            Total     White     “Negro & Other”

    All                     27.8%     27.6%         29.0%
    Agriculture              2.7       2.9           2.2
    Mining                  38.9      38.7            *
    Construction            41.3      42.2          33.7
    Manufacturing           38.4      37.8          43.7
    Transportation &
     Public Utilities       49.4      50.1          43.6
    Wholesale &
     Retail                 12.9      12.8          13.9
    Services &
     Finance                11.7      11.2          15.6
    Public
     Administration         27.8      27.0          33.4
    White Collar            12.5      12.0          20.7
    Blue Collar             42.1      42.8          36.7
    Service                 20.1      20.2          19.9

    *Base too small for an estimate.

    The table above summarizes the BLS estimates of unionization membership rates for males in 1970. To anyone familiar with contemporary data on unionization, the figures on the table are jarringly different from what prevails today. We can debate the long-term causes of union decline. And you don’t have to be a romantic about the role of unions in society and the economy, as some in academia are. But the simple fact is that over time, a sizable group in the U.S. population that was once represented in society has lost a major channel of voice, both in the workplace and in the broader sphere of economic policy. A vacuum was created and politics abhors a vacuum. So someone moved to fill the void and now there are consequences.



    [2] “Bowling alone” refers to the Robert Putnam idea that social and community institutions (including unions) have been in decline.


    [4] Gerald Mayer, “Union Membership Trends in the United States,” Congressional Research Service, 2004. Available at

    [5] U.S. Bureau of Labor Statistics, “Selected Earnings and Demographic Characteristics of Union Members, 1970,” U.S. GPO, 1972.


    [7] Public administration does not cover the entire public sector. Some government-run employment sources (transit enterprises, etc.) are lumped in with other private workers. 

  • 29 Oct 2016 2:14 PM | Daniel Mitchell (Administrator)

    Mitchell’s Musings 10-31-16: Sitting Out the Election

    Daniel J.B. Mitchell

    The Federal Reserve, on and off during the 2016 presidential election campaign, has been accused of following a policy aimed at helping Hillary Clinton by keeping the economy artificially afloat. So what has been the recent history of Fed policy, particularly during the election cycle? It’s actually not all that hard to interpret the available data.

    The chart above shows the “adjusted monetary base,” essentially the credit injected by the Fed into the broader economy.[1] What we observe is a remarkable break from the past trend in the midst of the Great Recession and then various surges and pauses thereafter. Particularly in a period in which short-term interest rates were stuck near nominal zero, it makes more sense to judge monetary policy by what happened to the base than by changes in interest rates.

    There are – as can be readily seen – distinct periods of monetary policy depicted on the chart. It is, of course, interesting to analyze the details of who said what at meetings of the Open Market Committee, and what outside observers were citing as important developments at those times. But the chart provides us with an after-the-fact general overview that cuts through the noise. Below is a listing of what we can see, combined with labor market conditions as measured by unemployment:

    • Sept. 2008 – Jan. 2009: Surge. Unemployment rate rises from 6.1% to 7.8%, seasonally adjusted. This period contains the 2008 election and a general collapse of the financial sector, which was rescued through various “bailout” policies.
    • Jan. 2009 – Aug. 2009: Pause. Unemployment rate rises from 7.8% to 9.6%. New president takes office. The Fed seems to want to see what the impact of its previous surge would be. But the economy continues to weaken. The surge prevented financial collapse but did not avert a severe downturn.
    • Aug. 2009 – Mar. 2010: Surge. Unemployment rate rises from 9.6% to 9.9%. The Fed – having observed the slide and the general poor economic outlook – tries more stimulus.
    • Mar. 2010 – Jan. 2011: Pause. Unemployment stabilizes (drops from 9.9% to 9.1%), but the labor market remains very soft. The previous surge delivered less than might have been hoped for.
    • Jan. 2011 – July 2011: Surge. Unemployment continues at its high level (9.1% to 9.0%).
    • July 2011 – Dec. 2013: Pause of about two and a half years to see the impact of the prior surge. Unemployment drops from 9.0% to 6.7% as the Fed essentially sits out the 2012 presidential election.
    • Dec. 2013 – Sept. 2014: Surge. The Fed concludes that – since inflation remains low – the unemployment rate can be pushed down further. Unemployment drops from 6.7% to 6.0%.
    • Sept. 2014 – present: Pause. Unemployment drops from 6.0% to 5.0%. The Fed implements a token slight rise in interest rates. Beyond that step, it essentially sits out the election. Inflation remains low.

    It is hard to look at this Fed policy history from 30,000 feet and conclude there is some active manipulation of monetary policy around election time. In the 2008 case, with the financial sector and the economy collapsing, of course the Fed became active. But it essentially sat out the 2012 election and has repeated that behavior in 2016.


    [1] The adjustments referred to in the adjusted monetary base involve changes in reserve requirements. None occurred during the period shown on the chart. The monetary base is currency in circulation plus deposits at the Fed by banks. 

  • 21 Oct 2016 11:47 AM | Daniel Mitchell (Administrator)

    Mitchell’s Musings 10-24-16: Who Will Re-Invent Buffett’s Wheel First?

    Daniel J.B. Mitchell

    As numerous past musings have noted, back in the 1980s – when the U.S. trade deficit became particularly marked – financier Warren Buffett in a Washington Post op-ed proposed to force balanced trade by means of a cap-and-trade type system.[1] Essentially, exporters would receive vouchers for each $1 of goods and services exported from the U.S. The vouchers would entitle the holder to import $1 of goods and services. The voucher could be exercised by the recipient or sold to someone else. Trade would become balanced since the value of exports would equal the value of imports.[2]
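    The bookkeeping behind the proposal is simple enough to sketch. The class and the dollar figures below are my own illustration of the mechanism, not anything from the op-ed itself: exporters earn a dollar of vouchers per dollar exported, and every dollar of imports must retire a dollar of vouchers, so imports can never exceed exports.

```python
class VoucherLedger:
    """Minimal sketch of the import-certificate bookkeeping."""

    def __init__(self):
        self.outstanding = 0.0   # vouchers issued but not yet used
        self.exports = 0.0
        self.imports = 0.0

    def record_export(self, dollars):
        self.exports += dollars
        self.outstanding += dollars   # exporter receives vouchers to use or sell

    def record_import(self, dollars):
        if dollars > self.outstanding:
            raise ValueError("not enough vouchers: imports are capped by exports")
        self.imports += dollars
        self.outstanding -= dollars   # vouchers are retired on import

ledger = VoucherLedger()
ledger.record_export(100.0)
ledger.record_import(60.0)
print(ledger.outstanding)   # 40.0 of import capacity left
```

    The market element of the plan – vouchers trading at a price that rations import demand – sits on top of this ledger; the ledger alone is what guarantees the balance.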

    At the time, the trade villain du jour was Japan, which tended to follow mercantilist trade policies. But the Buffett system would not involve Japan-bashing by the U.S. since it was non-discriminatory as to the target of exports and the source of imports. A side effect of the Buffett plan, however, is that all countries wishing to export to the U.S. would have reason to put pressure on any country that sought to advantage its exports artificially through currency manipulation or other means.

    Today, the villain du jour is China, but the argument for the Buffett plan remains the same.

    Despite the cogency of the Buffett proposal, it was never taken seriously by the folks Paul Krugman calls “Very Serious People.” Indeed, among those VSPs was (is) Krugman himself. The Buffett plan was easy to dismiss as “protectionist” or simply as an idea that didn’t come from a professional economist.

    The trade issue has made it back into the public consciousness thanks to the presidential election campaign. Bernie Sanders raised the issue on the Democratic side, but – of course – he did not become that party’s candidate. Donald Trump raised it on the Republican side and did become the GOP nominee. Exactly what Sanders would have done about the trade issue if elected was never clear (to me). Trump says he would make better trade deals and somehow address currency manipulation. I don’t find that approach to be much clearer than Sanders’. The only comprehensive proposal anyone has ever advanced is Buffett’s.

    I have noted in past musings that bringing U.S. trade into balance would not restore some manufacturing golden age of job opportunities. Thanks to rising productivity and technological change in manufacturing, we are not going back to an era in which manufacturing’s share of employment stood where it did circa the 1950s and 1960s. But balanced trade would make the current manufacturing sector, say, 15-20% bigger than it currently is. And that impact would provide some immediate relief to the displaced workers who are currently in play in terms of political affiliation. It would do so in ways that alternative proposals aimed at that group – such as free community college, which seems to be the remedy of choice among the VSPs – can’t hope to do.

    At this writing, polls indicate that Hillary Clinton is the likely victor in the 2016 presidential campaign – not because of her trade policies, but because she is the not-Trump candidate. So now the question arises: In the political fallout that is likely to occur post-election, which party will reinvent Buffett’s wheel? Or will the issue Buffett was trying to address three decades ago remain unaddressed?

    There is already some indication that remedies to achieve balanced trade will be on the Republican agenda, in order to retain the disillusioned workers who were attracted to Trump and who might at one time have been Democrats.[3] The VSP view – “those jobs are not coming back,” “protectionism,” “education is the key,” etc. – that predominates on the Democratic side is not going to be attractive to anyone except those who are stuck with such phrases. It’s up for grabs.


    [1] Warren E. Buffett, “How to solve our trade mess without ruining our economy,” Washington Post, May 3, 1987, p. B1. Available at: 

    [2] There would be some administrative issues regarding such services as tourism, verification of values of exports and imports, etc., as there are with any system.  


  • 21 Oct 2016 11:36 AM | Daniel Mitchell (Administrator)

    Mitchell’s Musings 10-17-16: Most Economists

    Daniel J.B. Mitchell

    I happened to hear a public radio broadcast from NPR recently on the depreciation of the British pound since the Brexit vote of last June.[1] The program began with this sentence:

    Since the U.K. voted to leave the European Union last summer, the country's currency - the pound - has lost about 16 percent of its value against the dollar. Most of the damage, according to economists, was self-inflicted.

    It ended with this sentence:

    The pound dropped again this morning trading below $1.23. Most economists think it has yet to hit bottom.

    In between the beginning and the end, there was what you might expect. There were references to the Brexit vote of June, anecdotes on how foreign tourists in Britain were benefiting from reduced costs, etc. But let’s start with the beginning sentence which references the fall in the pound.


    Source: as of October 11, 2016.


    As the chart above shows, relative to the euro, the pound at this writing is about where it was during and in the aftermath of the Great Recession. Until the Brexit vote, it tended to rise relative to the euro. When the vote occurred, it fell. And the pound has generally fallen since.

    The NPR program describes the fall in the pound as “damage” which was “self-inflicted.” There is no doubt that the Brexit vote was an Act of Man rather than an Act of God. But is it correct to view a currency depreciation as “damage”? Note that if the pound declined relative to the euro, it is also true (by inversion of the pound/euro ratio) that the euro appreciated relative to the pound. So was the euro-zone “helped” by its currency’s appreciation?
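    The arithmetic of that inversion is worth making explicit. A 16% depreciation of one currency – the figure NPR cited for the pound against the dollar – is, viewed from the other side of the ratio, roughly a 19% appreciation:

```python
# Depreciation and appreciation are two views of the same exchange-rate
# ratio: if currency A falls by d against currency B, then B rises by
# 1/(1 - d) - 1 against A.
def counterpart_appreciation(depreciation):
    """Appreciation of currency B implied by a given depreciation of A."""
    return 1.0 / (1.0 - depreciation) - 1.0

print(round(counterpart_appreciation(0.16), 3))  # about 0.19
```

    Calling one side of that ratio "damage" while ignoring the other side's "help" is the asymmetry the broadcast slips into.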

    Other things equal, the decline in the pound made British exports more competitive and imports to Britain less competitive. So on that dimension, you could just as well say Britain was helped and the euro-zone was damaged. Suppose you applied the same logic to the U.S. and its dollar that the NPR broadcast applied to Britain and its pound. The chart below shows an index of the U.S. dollar relative to the currencies of its trading partners since 2000.

    If you equate the exchange rate with national welfare, we were never as well off as we were just after the dot-com bust and the related recession. During the recovery from that recession, things got progressively worse if we use the exchange rate as our measure. The Great Recession then gave our welfare a big boost temporarily. But the recovery from that recession made us worse off again. We didn’t see a big improvement until the 2015-16 election cycle made the future of U.S. economic policy uncertain. If nothing in that interpretation makes much sense to you, you now can see the folly of identifying exchange rate trends with national welfare.


    Source: FRED database of the Federal Reserve Bank of St. Louis.


    The NPR broadcast seems to assume that because Brexit changed some fundamental determinants of the British exchange rate, the pound must end up lower than it was. However, the demand for the pound is ultimately a function of the demand for British exports and for British investment assets (British stocks, bonds, real property, etc.); the supply of the pound is ultimately a function of British demand for imports and foreign investment assets. How the demand and supply will balance out once the dust settles, i.e., what the eventual long-term exchange rate will be, is unknown. It will depend on such things as British inflation relative to that of its trading partners and rates of saving at home and abroad. But note that the question of whether or not Brexit was a good idea for Britain in terms of its national economic and political welfare is simply not the same thing as the exchange rate.

    Well, that’s fundamentals. The broadcast closes with the idea that “most economists” think – was there a survey? – that the pound will fall further. That prognosis isn’t accompanied by a time period. Is it by tomorrow? By next week? Whatever the time period may be, it seems to be a short-term prediction. And if it is short term, it should also be the case that most economists are going short on the pound because they know it will soon fall. But is there any evidence that, for example, British economists have been putting their holdings in euros? Are they going further and borrowing pounds and then investing them in euro-denominated assets? If it were evident that the pound would be notably lower in value relative to the euro in the near future, the rush into euros would make it lower relative to the euro today.

    Bottom lines:

    1) Will the pound be lower tomorrow than it was today? If you say “yes,” you have about a 50-50 chance of being right.

    2) Is it a Bad Thing for Britain that the pound exchange rate is lower than it was pre-Brexit? Other things equal, depreciation of the pound stimulates British exports and discourages imports. So let’s just say that the answer is more complicated than assuming that national welfare moves with the exchange rate.

    3) Finally, what should NPR have said in its broadcast? Probably not much more than that, given the decline in the pound, Americans might want to consider a London holiday.


    [1] “Brexit Results Prove Increasingly Costly to Britons,” Morning Edition, October 11, 2016. Available at:

  • 02 Oct 2016 4:27 PM | Daniel Mitchell (Administrator)

    Mitchell’s Musings 10-3-16: Fed Politics

    Daniel J.B. Mitchell

    At the September meeting of the Federal Reserve, interest rates were not changed. Presidential candidate Donald Trump indicated he thought the Fed was being political, i.e., helping his opponent. Of course, decision makers at the Fed are aware that there is an election cycle in progress. And, in a sense, they were responding to political events, although not in the way Trump suggests.

    First, it should be noted that the case for raising rates at this time is weak. Not only is inflation low, it is expected to remain low. As the chart below indicates, financial markets are not anticipating a burst of inflation. The spreads between conventional Treasury securities and inflation-adjusted Treasuries – indexes of the expected rate of price inflation – remain below 2%. In fact, except during the Great Recession and its immediate aftermath, expectations are lower now than they were during previous years when the economy was softer. Put another way, price inflation expectations remain low despite a seemingly tightening economy, so maybe the economy isn’t all that tight.
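    The breakeven measure the chart is based on is simply the difference between the two yields. A minimal sketch in Python, with illustrative (made-up) yields rather than actual market quotes:

    ```python
    # Breakeven (expected) inflation: the spread between a conventional
    # Treasury yield and an inflation-indexed Treasury (TIPS) yield of the
    # same maturity. The yields below are illustrative, not market data.

    def breakeven_inflation(nominal_yield: float, tips_yield: float) -> float:
        """Expected annual inflation implied by the two yields."""
        return nominal_yield - tips_yield

    # e.g., a 1.6% nominal ten-year yield against a 0.0% TIPS yield
    spread = breakeven_inflation(0.016, 0.000)
    print(f"implied expected inflation: {spread:.1%}")  # 1.6%, below 2%
    ```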

    And that is a second point. Despite a decline in unemployment to around 5% (the latest monthly figure is 4.9%), it is still unclear if the low level of unemployment in fact signifies “full employment.” As has been much discussed, the employment-to-population ratio is below previous peaks. The latest UCLA Anderson Forecast featured a chart for California. The equivalent for the U.S. as a whole would show much the same thing. As can be seen below, the ratio is still lower than at the previous cyclical peak. One explanation is that demographic changes account for the reduction. When you make “corrections” for demographic trends, the current ratio seems to be roughly equivalent to the prior peak, as the straight line on the chart shows.

    But even with that correction, there remains ambiguity. Why not consider the peak before the prior peak, i.e., the dot-com boom peak? The demographic trend would produce a line with a similar slope from that peak, and we would be below it. In short, the evidence for the proposition that we have hit some kind of capacity constraint is ambiguous. Which peak is really THE peak for comparison?

    You could look at wage behavior rather than at prices. There, we do find some evidence of wage growth acceleration. The latest data on the Employment Cost Index did show some acceleration. But as the chart below indicates, the series can be volatile. The series stayed at about 2% per annum during much of the recovery, blipped up in 2014, but then came down again. So surely a case could be made for waiting a few quarters before passing a final judgment and making a policy change.

    Employment Cost Index, Total Compensation, Private Sector, 12-Month Percent Changes

    On the output side, real GDP growth has been relatively anemic so far in 2016. Generally, recovery from the trough of the Great Recession to the present has been slow. Perhaps not surprisingly, that fact has given rise to discussion of whether the post-World War II growth rate was anomalously fast. But even if there is now a “new normal” of real growth at around 2% per annum, we haven’t seen even that pace in 2016. During the first half of the year, measured real growth was a bit over 1% per annum.

    Bottom line: Even if 2016 were not a presidential election year, the Fed might have been dovish about raising interest rates. When there is uncertainty about diagnosing economic conditions, the tendency is not to shift policy.[1] Apart from the unknowns described above, however, the presidential election itself – or rather its outcome – has created an even larger element of uncertainty than price inflation, wage inflation, or real growth.

    One interesting feature of the September UCLA Anderson Forecast was an attempt at economic modeling of the election results by Forecast economist William Yu. In recent years, there has been an increased interest in political forecasting based on economic conditions. Yu developed a model across states and time using as explanatory variables general economic trends, demographics, past voting behavior, and other factors. He looked particularly at electoral votes in the “swing” states. And the conclusion was that the election was too close to call. Popular votes in the swing states hovered around 50-50 for the two major party candidates.

    Given the oddities of the 2016 election, and the possibility that there could be economic turmoil when the results are known, why would the Fed raise interest rates two months before Election Day? It’s likely that election uncertainty, more than inflation uncertainty, drove the decision. It’s fine to forecast, but surely confidence in the results of the exercise must drop when you don’t know who will be making policy. In particular, what a Trump presidency would mean for economic policy and economic outcomes is hard to predict. In that sense, you could say the decision at the Fed to wait until after the election before making a change in monetary policy was “political.”


    [1] Some participants on the Federal Open Market Committee did want to raise rates. None of the members of the Board of Governors, however, voted for such an increase. The degree of uncertainty regarding economic trends is reflected in the official transcript of the news conference after the September decision:

    Fed Chair Janet Yellen in response to a reporter's question after the interest rate decision: "...I think we are trying to understand some difficult issues. There is less disagreement among participants in the (Federal Open Market) Committee than you might think, listening to speeches and commentary. I think we all agree that the economy is making progress, that we are close to an unemployment rate that is one that’s sustainable in the longer run. We all agree we are undershooting our inflation goal, and that we want to make sure we stay on a course that raises that to 2 percent. And we’re struggling with a difficult set of issues about what is the “new normal” in this economy and in the global economy more generally, which explains why we keep revising down the (real growth) rate path..."


  • 23 Sep 2016 11:40 AM | Daniel Mitchell (Administrator)

    Mitchell’s Musings 9-26-16: Measurement for What? – Part 2

    Daniel J.B. Mitchell

    I had hoped to be done, for a while anyway, with defined-benefit pensions with last week’s musing.[1] But it was not to be. Recently, the New York Times got interested in CalPERS, the giant California pension fund that covers most state employees (other than employees of the University of California [UC]) and many local government employees.[2] The essence of the Times article is that something nefarious is going on because, when a local government wanted to terminate its pension plan with CalPERS, CalPERS used a low interest rate to calculate the liability that the government would have to pay to CalPERS. The rate was lower than CalPERS’ official expected rate of earnings, which is otherwise used by CalPERS as a discount rate for liabilities generally.

    So let’s separate some issues here. Last week, we dealt with the question of using a lower interest rate for discounting pension liabilities than is used for expected earnings on pension assets. We noted that some folks argue that because the pension liability to employees and retirees is ironclad, the discount rate should reflect a riskless investment. What we indicated last week was that an ongoing pension plan has no finite duration and that, if over a long period it earns what it expects and funds the plan in accord with that expectation, it will have enough money in the till to meet its liabilities. Apparently, there are some folks (including an adviser to the UC Board of Regents) who think that if you expect to earn, say, 6% per annum, you fund accordingly, and you in fact earn 6%, you will still run out of money unless you discount liabilities by a lower rate than 6%. Simple arithmetic says that conclusion is wrong.
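    That simple arithmetic can be checked directly. Below is a minimal single-liability sketch (the numbers are illustrative, not UC or CalPERS figures): if the discount rate, the funding assumption, and the realized return are all the same 6%, the plan ends with exactly what it owes.

    ```python
    # If you discount a future liability at the rate you expect to earn,
    # contribute that discounted amount, and actually earn that rate, you
    # have exactly enough at the end -- no lower discount rate is needed.

    rate = 0.06          # expected earnings rate, also used as the discount rate
    years = 30
    liability = 1_000.0  # amount owed thirty years from now

    contribution = liability / (1 + rate) ** years           # fund per the expectation
    assets_at_maturity = contribution * (1 + rate) ** years  # earn per the expectation

    print(round(assets_at_maturity, 2))  # 1000.0 -- no shortfall
    ```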

    Of course, the big IF in that idea is that you in fact earn in the long run what you today reasonably can expect. If you use an unreasonably high rate of return to discount future liabilities and to estimate future earnings, you will indeed come up short. The lesson is that you should use a reasonable rate. And you should keep adjusting that estimated rate based on incoming information. There is little question about that simple notion.

    What the Times seems to be saying is that using the lower rate for the local government that wanted to terminate its plan proves somehow that CalPERS’ expected earnings rate is too high and that really the termination discount rate for liabilities is what it should be using to calculate its funding ratio. To be fair, the article is not entirely clear about what is wrong, but the reader is left with the impression that using two rates shows something is phony. The headline with “two sets of books” suggests outright corruption and false measurement.

    According to the CalPERS actuary, CalPERS’ official estimated rate of earnings, 7.5%/annum, is too high. But that view has nothing to do with the rate charged to terminating plans. The actuary simply believes that, looking ahead, earnings will be lower over the long term than 7.5%/annum.

    CalPERS should be using a reasonable rate, as recommended by its actuary. If it isn’t, that failure is a sign of bad governance. (And there have been problems over the years with bad governance at CalPERS, including outright corruption.) But none of this concern is related to the low termination discount rate.

    CalPERS is actually a set of plans, not a single plan. If a local jurisdiction has a pension plan for its employees, CalPERS separately calculates its liabilities. Liabilities will vary from jurisdiction to jurisdiction depending on employee demographics and behavior. If a local government decides to terminate its plan, the plan still has a liability to covered incumbent workers and retirees. Termination shifts the plan from one with an indefinite duration to one with a finite end. Someday, the last participant will die and the plan will truly end. CalPERS must ensure that it has enough money in that plan to pay off that last participant. By law, it cannot take money from other jurisdictions’ plans and subsidize any remaining shortfall in the terminating plan.[3]

    Given the shift from an indefinite duration to a finite duration, and given the bar against moving money from one plan to another, CalPERS must take a reduced risk approach to the terminating plan. It must invest in low risk assets. So, of course, it uses a low discount rate because the plan itself will earn a low rate on its low-risk assets. There is nothing nefarious about using a lower rate for terminating plans; it is just prudent pension management.[4]
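    The effect of that rate switch on the measured liability can be illustrated with a hypothetical promised payment. The 7.5% figure is CalPERS’ official earnings assumption noted above; the 3% low-risk rate is an assumption chosen purely for illustration.

    ```python
    # The same promised payment has a much larger present value when it is
    # discounted at a low-risk rate than at the expected-earnings rate.

    def present_value(payment: float, rate: float, years: int) -> float:
        """Lump sum needed today to cover `payment` due in `years` years."""
        return payment / (1 + rate) ** years

    payment, years = 1_000.0, 30
    ongoing = present_value(payment, 0.075, years)     # official earnings assumption
    terminating = present_value(payment, 0.03, years)  # illustrative low-risk rate

    print(f"ongoing: ${ongoing:.2f}  terminating: ${terminating:.2f}")
    # the terminating valuation comes out several times the ongoing valuation
    ```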

    In short, CalPERS does have management problems. It should be using realistic estimates of future earnings. But the fact that it discounts terminating plans using a low interest rate is entirely appropriate.




    [3] California governor Jerry Brown recently vetoed an ad hoc bill that would have allowed moving money from one plan to another because it would violate the longstanding principle that each plan is a separate entity. See

    [4] The California Legislative Analyst’s Office recognizes that prudence requires low-risk (and thus low-return) assets for plans that are terminating. It provided an analysis of a proposed ballot initiative (that never made it to the ballot) that would have phased out defined-benefit pensions and noted that “there are costs related to the closure of defined benefit plans. As these plans ‘wind down’ over the decades, their pension boards likely would change investments to those with lower risk and lower expected returns. This would result in higher costs for these closed plans.” Source:  

  • 17 Sep 2016 9:14 AM | Daniel Mitchell (Administrator)

    Mitchell’s Musings 9-19-2016: Measurement for What?

    Daniel J.B. Mitchell

    I am going to take up defined-benefit pensions in this musing, but let’s start with a simple example. Suppose I wish to have $1,000 thirty years from now. How much should I put aside today as a lump sum to have that target amount in thirty years?

    Clearly, the answer has to depend on what I think I will earn over the thirty-year period with my lump sum. But it is hard to be sure what the earnings rate will be unless I invest in something that is relatively riskless and that has that thirty-year duration.  Let’s suppose that there is such an asset in the marketplace and it carries an annual yield of 4%. As it turns out, if I put aside about $308 today and invest it in that asset, I will have my $1,000 in thirty years.[1]

    However, suppose I were to invest in a reasonably prudent mix of stocks and bonds which are not as secure as the asset yielding 4%/annum, but which I believe – based on advice of experts – can be expected to yield 6%/annum. “Expected” is not the same as a sure thing, but the experts, looking at past long-term history and what they can see looking ahead, think 6% is a reasonable expectation. Of course, the experts note as a proviso that they could be wrong and that the actual result might turn out to be more or less than 6%.

    As it turns out, if I were willing to take the risk that the 6%/annum target might not be achieved, I could put aside only $174 today to get my hoped-for $1,000 in thirty years.[2] That is, if things work out as expected, my $174 will grow with compound interest at 6% and become the target $1,000. And had I instead put aside $308, I would find myself thirty years from now with an “extra” $771.

    Suppose further that in the period before I made the decision on how much to put aside today, expert advisors had been telling me that one could expect 7%/annum on average over a thirty-year period. But at the moment of decision, they told me that in view of recent adverse developments, they now believed 6% was more realistic. What would happen if I chose to believe that my advisors were being overly pessimistic due to recent developments (panicking) and that it would be better to keep the 7%/annum assumption? In that case, I would put aside only $131 today.[3] If keeping to 7% turned out to be correct, I would have my $1,000 in thirty years. But if my advisors were correct with their downward revision and the actual return was 6%, my $131 would grow to only about $755, leaving me short by about $245.
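    The three lump sums above are just present values of $1,000 at thirty years under each assumed rate of return. A quick sketch reproducing them:

    ```python
    # Present value of a $1,000 target thirty years out under each
    # assumed annual rate of return from the example.

    def lump_sum_today(target: float, rate: float, years: int = 30) -> float:
        """How much to set aside now to reach `target` in `years` years."""
        return target / (1 + rate) ** years

    for rate in (0.04, 0.06, 0.07):
        print(f"{rate:.0%}: ${lump_sum_today(1_000, rate):.2f}")
    # 4%: $308.32   6%: $174.11   7%: $131.37
    ```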

    The most obvious point of this tale is that the more I put away today, the more likely it is that I will have at least $1,000 in thirty years. Put another way, if I follow what a low long-term rate of return implies in deciding how much to put away, I increase the odds of at least achieving my target. Note that there are really two steps implied. First, I assume a low rate. Second, because I assumed a low rate, I decide to put more money away today. These are separable events.

    Suppose we translate these numerical examples into pension terms, and suppose that the actual rate of return turned out to be exactly the advisors’ forecast of 6%/annum. The liability of my plan is $1,000 thirty years from now. If I had discounted that liability by 6% and had – as a result – put away $174 today, the plan would turn out to be fully funded. (My current ratio of assets to discounted liabilities = 1, or 100%.) If I had put away $308, the plan would be overfunded by 77% ($308/$174 = 1.77). If I had put away only $131, my funding ratio would be only 75% ($131/$174 = .75). I would be underfunded by 25%.
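    The funding ratios in the paragraph above follow from dividing the assets on hand by the liability discounted at the (assumed-correct) 6% rate:

    ```python
    # Funding ratio = current assets / liability discounted at 6%.
    # $174.11 is the present value of $1,000 in thirty years at 6%.

    required = 174.11

    for assets in (308.32, 174.11, 131.37):
        print(f"${assets:.2f} set aside -> funding ratio {assets / required:.0%}")
    # 177% (overfunded), 100% (fully funded), 75% (underfunded)
    ```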

    I have gone through this arithmetic because the University of California (UC) defined-benefit pension is officially underfunded and the UC Regents – as plan trustees – periodically mull over what to do about it. The plan uses a methodology which estimates the discounted value of its liabilities to future retirees by using the same discount rate as the rate officially estimated by the UC Regents to be their long-term expected annual rate of return.  Currently, the officially expected rate of return is 7.25%/annum.[4] However, it is clear from public statements of the Regents’ chief investment officer and his staff that they (the CIO and his staff) believe a 6-ish long-term rate is more realistic going forward than a 7-ish rate.

    Moreover, there is at least one outside advisor on the Regents’ Committee on Investments who is arguing that even if the 6-ish rate is a reasonable expectation of the long-term return, the discount rate that should be applied to the liability of the plan is a 4-ish number.[5] That 4-ish number, in his view, seems to equate to what the State of California pays on long-term general obligation bonds. (The state in fact pays less than that because its bonds are tax-exempt, but if you adjust the yields to the equivalent taxable rate, they would have a 4-ish return.) His argument is that if the state’s pension liabilities are as firm as its bond liabilities, the same rate should be used for both.

    There are two different issues here. The first is whether the Regents should lower their official expectation of a return to the level their own expert is telling them is appropriate (i.e., from a 7-ish expectation to a 6-ish rate). Presumably they should, unless they truly believe he is being panicked by recent developments and that the old official 7-ish assumptions are still valid. But if they instead believe what their expert is saying to them now, that belief will tell them that the plan is more underfunded than current methodology indicates. So the corollary is that if they lower the official expected return, they should also up the contributions to the plan appropriately. Again, as in our earlier example, there is a two-step process here. First, a change in the assumption and, second, acting on that assumption.

    If they don’t take the second step, the plan would gradually have a lower and lower asset balance relative to liabilities and would someday become a pay-as-you-go arrangement. That is, each cohort of employees would – at some date in the future - end up being “taxed” to pay the pensions of previous cohorts. As a practical matter, without the availability of earnings from a large investment pool of assets, the pay-as-you-go cost would be very high.

    Another issue, however, is whether it makes sense to have a lower discount rate than the expected earnings rate, as the outside expert is arguing. Lowering the discount rate for liabilities will up the measured value of those liabilities and thus lower the measured value of the funding ratio. If more contributions result, the likelihood of at least having the assets needed to pay those liabilities goes up. But note that measured value is not the same as actual value. The actual value will depend on actual future earnings. If you believe those earnings to be 6-ish, and yet you fund on the basis of 4-ish, your plan assets will gradually rise relative to liabilities and will do so indefinitely. Presumably, at some level of overfunding, the contributions would be halted.[6]
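    A minimal sketch of that scenario, using the numbers from the example above (illustrative, not actual plan figures): fund as if returns will be 4%, but actually earn 6%, and assets end far above the liability.

    ```python
    # Contribute the 4%-discounted amount for a $1,000 liability due in
    # thirty years, but let the money actually compound at 6%.

    discount_rate, actual_rate, years, liability = 0.04, 0.06, 30, 1_000.0

    assets = liability / (1 + discount_rate) ** years  # ~$308 contributed today
    assets *= (1 + actual_rate) ** years               # but it compounds at 6%

    print(round(assets, 2))  # roughly $1,771 against the $1,000 owed
    ```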

    A key point about a pension plan – which differentiates it from the $1,000-in-thirty-years example – is that a pension plan goes on indefinitely; there is no finite maturity. On a regular basis, estimates of liabilities and expected future rates of return should be adjusted iteratively. There really isn’t a behavioral reason to pick a number such as 4% for discounting on the grounds that pension liabilities are in theory similar to state bond liabilities. The 4% vs. 6% differential might be taken as an indication of some measure of the risk borne by employees. That is, other things equal, employees might be willing to contribute more to a plan that guaranteed an outcome than to one that produced only an expected outcome. (But there is much research suggesting that ordinary folks have big problems in evaluating risk.)

    Just blowing up the measured liability relative to assets by assuming a lower discount rate than the assumed earnings rate might well have perverse results. It could make the current underfunding problem seem intractable and lead to Regental paralysis and no solution. The purpose of computing the funding ratio is managerial, not theoretical. You make the calculation to help guide management decisions.

    The best outcome, and the one most feasible, would be for the Regents a) to lower their expected earnings rate (and their liability discount rate) to what the CIO and his staff are suggesting is reasonable, and b) to modify their funding plan to accord with that rate, used as both the projected earnings rate and the discount rate. And they should follow that approach periodically and make iterative adjustments as needed.


    [1] $308.32 x (1.04)^30 = $1,000

    [2] $174.11 x (1.06)^30 = $1,000

    [3] $131.37 x (1.07)^30 = $1,000

    [4] The rate was recently lowered from 7.50%.

    [5] You can hear the September 9, 2016 meeting of the committee at The comments of the outside advisor are at approximately 1:29 on the audio recording.  

    [6] We know in fact that when the plan became overfunded in the 1980s as officially measured, contributions were halted for two decades. So there is evidence that actual contribution behavior responds to the measured value of the funding ratio. We know in fact that – with a long lag – when the official measure of the ratio came to show underfunding, contributions were resumed. There may well be asymmetry between the reaction to overfunding and the reaction to underfunding. And, as noted below, a high measured level of underfunding could cause paralysis.

  • 12 Sep 2016 8:23 AM | Daniel Mitchell (Administrator)

    Mitchell’s Musings 9-12-16: Chicago

    Daniel J.B. Mitchell

    We are used to the idea that movies are rated for “mature” content (PG-13, R, etc.). TV shows also are similarly rated. Sometimes radio and TV programs are prefaced with a statement that there is language some might find offensive. So is there any real difference between these types of warning systems and the “trigger warnings” for college courses that have been in the news in recent years?

    The use of such warnings in course syllabi – in contrast to movie, TV, and radio warnings – has produced (triggered?) substantial controversy. An engineering professor at Auburn University recently got himself fifteen minutes of fame by putting the statement "TRIGGER WARNING: physics, trigonometry, sine, cosine, tangent, vector, force, work, energy, stress, quiz, grade” at the top of his syllabus as a parody.[1] Much more attention was paid – because it wasn’t intended as a joke – to an orientation statement from a dean at the University of Chicago: "Our commitment to academic freedom means that we do not support so-called trigger warnings, we do not cancel invited speakers because their topics might prove controversial and we do not condone the creation of intellectual safe spaces where individuals can retreat from ideas and perspectives at odds with their own."[2]

    Obviously, despite the noncontroversial precedent of movie ratings, the trigger movement is being taken by some as a serious matter, although how widespread the concern is among practicing academics can be questioned.[3] In particular, the Chicago statement received substantial attention and applause – more from the political right than elsewhere.[4] LA Times columnist Meghan Daum – who is not a right-wing observer – argued that the use of triggers, by itself not a big deal in her view, has become mixed up with “the much larger phenomenon of leftist groupthink masquerading as liberalism.”[5] That is, in her view a practice that is relatively harmless has come to stand for something else.

    Note that the Chicago statement is self-contradictory. In order to protect free speech, it seems to support a ban on the use of a type of statement (speech) by an instructor on a syllabus. You can argue whether a policy of “not supporting” use of trigger warnings is an absolute ban on them. But given that statement, if you were a junior (untenured) faculty member at Chicago, might you not be reluctant to use a trigger? Would you want to be doing something which the university doesn’t “support”? The statement, after all, is being made by a campus authority figure. If the dean were just expressing his personal opinion, he could have said so, and then further clarified that he was not articulating an official policy. But that wasn’t what he did.

    There is a difference between someone deciding not to go to an R-rated movie to avoid the offensive content and a student responding to a trigger warning on a syllabus. The former situation represents a choice about an entertainment. The latter might be interpreted by the student as meaning you don’t have to read material that might offend you and yet you would nonetheless receive credit for the course. Or it might be taken to mean that you don’t have to read what offends you and that the instructor is bound to supply you with alternative readings that you prefer.

    Just putting a trigger warning on a syllabus without further comment is potentially confusing. If the potentially offensive readings are nonetheless required, the syllabus should so indicate. You don’t have to go to an R-rated movie and there really is no consequence if you don’t. But if you take a course, you do have to do the assigned work. And some courses, moreover, are required for completing a major, or even to graduate. A syllabus trigger warning has an element of importance that a movie rating does not. So it needs to be explained.

    I am unaware of any university requiring the use of trigger warnings. And it is not clear how you would mandate their use without also defining what kind of content is offensive. Is it any content with violence? Any content with sex? Would the warnings cover reference to wars – inherently violent - in history courses? Could students complain about a lack of a warning on any topic of their choosing in some university tribunal? That kind of complaint mechanism could lead to de facto censorship by anyone who didn’t like the way a topic was discussed.

    So we might add a proviso to our view that the Chicago dean should have made it clear that he was merely expressing a personal opinion and not a university mandate (if he was just expressing a personal viewpoint). Sometimes, norms can become quasi-mandates without any official policy change. What started out as a fad of adding trigger warnings could become an expectation of more. Harvard law professor Jeannie Suk Gersen noted in a New Yorker article that the discussion in criminal law classes has been impeded by student objections to including as a topic the law of sexual assault.[6] That is, the demand for trigger warnings – which are not mandated by Harvard – has morphed into a demand to omit a section of the curriculum that lawyers are supposed to know something about.

    Perhaps, therefore, the Chicago dean might better have said that while the use of trigger warnings was a matter of instructor discretion, their use did not mean that potentially offensive topics would be avoided or that dealing with such topics is at the option of those enrolled. My guess is that is what he meant to say.

    As far as articulating official policy, he might further have focused on what the University of Chicago is not doing. Chicago is not creating the “bias-response teams” which have embarrassed other universities by censoring courses and speech in response to someone’s complaints of offense. You don’t have to be an expert in organizational behavior to know that such teams, once they are created, will seek out work to do to justify their existence. At least two universities that created such teams have had to disband them after the teams started to do what they shouldn’t do or seemed poised to do so.[7]

    Daum is correct; trigger warnings by themselves are relatively harmless. But if they are used, their implications should be explained to students. And going beyond such warnings in the direction of institutionalized thought police should be avoided. There will always be a few cases of academics who misbehave in some extreme manner – teaching wacko conspiracy theories or whatever. But they can be dealt with in an ad hoc manner. And, if you are wondering about yours truly, I don’t have anything labeled “trigger warnings” on my current syllabus, but I have long provided a very brief description of what each assigned item is about. And it is clear that all the items listed as assignments are required.




    [3] NPR reported on a survey it conducted – which it qualified as nonscientific – in which half of college instructors said they used trigger warnings. That result seems implausible since there are many fields – math, the sciences, computer science, etc. – where, other than the parody of footnote 1, it’s hard to see how they would be used. See (Would there be a warning for religious fundamentalists that there is scientific evidence of evolution or that there is evidence that the Earth is more than 6,000 years old?)

    [4] However, the Chicago approach was supported by a former Obama administration official:





Employment Policy Research Network (A member-driven project of the Labor and Employment Relations Association)

The EPRN began with generous grants from the Rockefeller, Russell Sage, and Ewing Marion Kauffman Foundations

