Mitchell's Musings

  • 17 Aug 2015 2:55 PM | Daniel J.B. Mitchell (Administrator)

    From time to time, these musings have talked money, specifically currency exchange rates. There has been much news media discussion in the past week about China’s recent devaluation of the Yuan. So let’s start with terminology. A freely floating currency – one that is not “manipulated” – sometimes rises in value relative to other currencies and sometimes falls. It sometimes appreciates and sometimes depreciates. That’s the terminology we use for those movements when they occur due to market forces. No government or central bank or any other official agency is responsible. Appreciation or depreciation just happens due to market conditions.

    In contrast, the word “devaluation” implies that a policy decision was made to change (lower) a currency’s value relative to another. The word devaluation, put another way, implies “manipulation.” (The opposite is “revaluation.”) You can’t talk about a “devaluation” and then ruminate over whether manipulation has occurred. It has occurred by definition.

    Now there is a longstanding history in international monetary affairs about the pluses and minuses of systems of fixed exchange rates, freely floating exchange rates, and arrangements that fall somewhere in between. The old gold standard was a system of fixed exchange rates. The Bretton Woods system set up towards the end of World War II was a fixed exchange rate system. When Bretton Woods fell apart in the early 1970s, a mix of arrangements developed. The U.S., however, largely left its dollar to float. Some countries attempted to maintain fixed arrangements relative to one another but float against others. Some tried to keep their currencies within a band of some other currency. The Eurozone eventually formed and some countries abandoned their internal currencies for a common, international currency. There were experiments with “currency board” systems in which a country pegged its currency to another through a kind of central bank operating by formula.

    To the extent that a country chooses to have some say in its currency’s value, and not leave the exchange rate entirely to market forces, there has to be some regime of regulation and/or intervention in currency markets, i.e., “manipulation.” You can debate whether the result of such manipulation is a Good Thing or a Bad Thing, but (again) that there is manipulation taking place is not a matter for debate.

    When you look at China’s trade with the U.S. as shown on the chart, there is an anomaly. You have a rich country – the U.S. – borrowing from a developing country, essentially to finance current consumption. As the chart above indicates, that odd situation has persisted for a long time. The U.S. trade deficit with China now is close to 2% of U.S. GDP. That figure may not sound like much. But, to put it in perspective, the peak-to-trough drop in U.S. real GDP during the Great Recession was around 4%. So the impact of 2% is hardly negligible.

    China, from time to time, says that it wants to move (gradually) to a floating exchange rate. But the chart surely suggests that what it has been doing is maintaining an undervalued currency. The recent devaluation was said to be part of a move to a market exchange rate. Numerous journalists repeated that interpretation. But any such move would have to be a revaluation (increase in the Yuan’s value), not a devaluation.

    Moreover, as noted earlier, the very word “devaluation” implies an official policy, not some blind market force. So it’s hard to get away from the fact that China views the exchange rate as a macroeconomic tool, not something to be left to the forces of currency markets. It “manipulates” its currency value. Actions speak louder than words, although apparently not to those journalists who repeated the self-contradictory move-towards-the-market story. The Chinese economy was slowing down and to stimulate demand, the Yuan was devalued by the powers-that-be in China. It isn’t complicated. No elaborate interpretation is needed. And it’s a move away from the market, not towards it.

    What is the impact on the U.S.? Again, the story is not complicated. If boosting Chinese exports to the U.S. and discouraging Chinese imports from the U.S. is a stimulus to China’s economy, it has to be a negative, other things equal, for the U.S. Commentators quickly chimed in to say that, yes, there is a negative effect, but it will be small. After all, the trade deficit with China is only 2% of U.S. GDP and the devaluation will only have an incremental effect on that pre-existing deficit.

    The problem, however, is that for a country such as the U.S., marginal shifts in trade patterns are always in some sense small in their overall impact. But they can be large in individual industries or sectors. As we have pointed out in past musings, manufacturing – which is now only about an eighth of U.S. GDP – is especially affected by trade shifts since much of trade involves manufactured goods. So what happens in trade and exchange rates matters to manufacturing and to jobs in manufacturing.

    Rather than discuss what the Chinese devaluation means – when it’s obvious what it means – isn’t it time to revisit U.S. exchange rate policy, a policy discussion that really hasn’t occurred since the early 1970s when fixed exchange rates were abandoned? At that point, the U.S. essentially said it would not intervene in exchange markets in order to affect the value of the dollar, but that other countries could do what they liked. The problem with that approach is that an exchange rate inherently involves two currencies; it is the price of one currency relative to another. So the decision to float the dollar and let others do what they liked was essentially a de facto decision to let other countries “manipulate” the value of the dollar, if they so wanted. It might have been the best policy choice back then. But over four decades later, it’s time for a review.

  • 10 Aug 2015 2:17 PM | Daniel J.B. Mitchell (Administrator)

    Robert Lawrence of the Peterson Institute posted a video which purports to resolve the issue of why real wages have lagged productivity since the 1970s.[1] He starts with a chart showing a gap opening up between average hourly wages of production and nonsupervisory workers deflated by the Consumer Price Index (CPI) and output per hour (productivity) as measured by the U.S. Bureau of Labor Statistics (BLS). In steps he adjusts the real wage series by adding in employees other than nonsupervisory workers, taking account of benefits received by workers (which are not included in the average hourly earnings series) and noting that the price index used to deflate the output numerator in “output per hour” differs from the CPI and that the former rises more slowly than the latter starting in the 1970s. So if you use wages and benefits for all workers and if you deflate those wages by the deflator for output rather than by the CPI, the puzzle disappears except for the period after the Great Recession.

    It’s worth noting that there is no law of the universe that says that real wages (however measured) must rise with productivity (however measured). The idea that the two series should be linked derives from the observation that they appeared to be moving together after World War II as an empirical matter. Furthermore, there seemed to be two notions of why there should be a linkage beyond the mere empirical observation. To explore the proposition, let’s represent the idea in the abstract:

    Let W = a measure of nominal wages, P = a general price index, Y = a broad measure of national output in nominal terms, and L = labor hours. Saying real wages rise with productivity is equivalent to saying:

    W/P = s(Y/P)/L, i.e., the real (deflated) wage (W/P) is proportionate to real (deflated) output (Y/P) divided by labor hours input L and where s is a coefficient of proportionality.

    Note that you can rearrange these terms to become s = WL/Y, i.e., s turns out to be labor’s relative share of national output. So the assumption that real wages rise with productivity is another way of saying that labor’s relative share of national output is constant. Note, for later reference, that this rearrangement is entirely in nominal dollar terms; there is no price index involved.
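The rearrangement above can be checked numerically. The following sketch uses made-up illustrative values for W, L, Y, and P (not actual data) to confirm that s = WL/Y is exactly the labor share, and that the proportionality holds regardless of which price index P is chosen:

```python
# Numeric check, with hypothetical illustrative values, that
# W/P = s*(Y/P)/L rearranges to s = W*L/Y (labor's share).
W = 30.0       # nominal average hourly wage (hypothetical)
L = 200e9      # total labor hours (hypothetical)
Y = 15e12      # nominal national output (hypothetical)
P = 1.25       # a general price index (hypothetical)

s = W * L / Y  # labor's relative share -- nominal terms only, no price index

# The original relation holds with that s, and P cancels out entirely:
real_wage = W / P
productivity = (Y / P) / L
assert abs(real_wage - s * productivity) < 1e-9

print(s)  # 0.4 -- labor's share of nominal output
```

Note that the assertion would pass for any value of P, which is the point made in the text: the constant-share statement involves no price index at all.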

    Some observers see (or saw) a moral element in having real wages rise with productivity; some see (or saw) a moral element in labor’s relative share being a constant. In the former case, there seems to be a Puritan Ethic-type morality behind the idea that the way workers get ahead is through producing more. Work harder and you will advance! In the latter case, perhaps it is just seen as fair that labor and capital each share proportionally as the economy grows. I am not saying that these are good ways to look at the relationship; only that there is a certain appeal to the concept from various moral angles.

    There is also a historical link between the real wage-productivity relationship and wage-price controls and guidelines. We can also rearrange our starting equation as:
    P = (1/s)[WL/(Y/P)], where the term in brackets [ ] is the average wage cost of a unit of real output or what is called “unit labor costs.” If s is a constant, so is 1/s, and the revised equation says that prices are proportionate to unit labor costs. An interpretation is that firms use some kind of markup pricing when aggregated so that if you can set (or limit) the rise of unit labor costs, you can set (or limit) the rate of inflation. Control wages, the nominal element in unit labor costs, and you can control prices. Crudely, your guideline for nominal wage increases should be your target rate of inflation plus the expected long-term rise in productivity. Such rules were used in the Kennedy-Johnson wage-price guideposts program and the Nixon-era wage-price controls.
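The guidepost arithmetic described above can be sketched as follows. The numbers are illustrative only, not the actual Kennedy-Johnson or Nixon-era figures; the point is that with s constant, prices move with unit labor costs, so the wage guideline is (approximately) the inflation target plus trend productivity growth:

```python
# Sketch of the guidepost rule: if prices are proportional to unit labor
# costs (s constant), then approximately
#   inflation = wage growth - productivity growth,
# so the nominal wage guideline is the target inflation rate plus the
# expected long-term rise in productivity. Illustrative numbers only.
target_inflation = 0.02     # target rate of inflation (hypothetical)
trend_productivity = 0.03   # expected trend productivity growth (hypothetical)

wage_guideline = target_inflation + trend_productivity
print(f"wage guideline: {wage_guideline:.0%}")  # wage guideline: 5%

# Exact version using growth factors: the price level moves with unit
# labor costs, ULC = W / (output per hour), so inflation compounds as
# (1 + wage growth) / (1 + productivity growth) - 1.
implied_inflation = (1 + wage_guideline) / (1 + trend_productivity) - 1
print(f"implied inflation: {implied_inflation:.2%}")  # implied inflation: 1.94%
```

The small gap between 1.94% and the 2% target shows why the simple sum is described as a crude rule: it is a first-order approximation to the exact compounding relation.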

    Note that there is nothing about wage equality or inequality involved in these notions. And it is more or less assumed that W is an aggregate measure (an average wage of everyone) and that if some element of pay comes in the form of benefits, it is included in W. Similarly, it is more or less assumed that P is a general measure of prices and that it is used both to deflate output and to deflate wages. Since P is an average, nothing precludes some prices from rising faster than others. Since W is an average, nothing precludes particular wages, say for certain occupations or groups, from rising faster than others.

    The idea that real wages either should, or do, rise with productivity in the abstract doesn’t deal with inequality of wage growth within the workforce and certainly includes payment for labor in the form of benefits. So let’s take a look in the chart below at the BLS data set that most closely adheres to the broad concept. Such a data set can be found in the various series connected to output-per-hour (productivity). The price index used to deflate wages (which include benefits and pay for all workers) is the Consumer Price Index. The broadest sector available is the “business” sector which is essentially all private business plus government enterprises that are quasi-commercial such as the Postal Service, transit operators, etc.

    It is well known that productivity, as measured by BLS, has a cyclical component so the chart above uses business cycle peaks (except for the latest available year, 2014) to adjust for such effects.[2] The real wage and the productivity measures do seem to diverge starting in the 1970s, although pinning down the precise year in which the divergence occurred would be difficult since the two data series never moved precisely in lock step.

    It’s also true, as Lawrence noted, that much of the divergence seems to be based on the faster growth of the CPI relative to the deflator used for determining real output, as can be seen on the chart above. However, Lawrence seems to take the two indexes to be “true” for their different purposes. That is, the CPI is supposed to be truly a valid measure of worker consumption over the decades while the deflator used to turn nominal output into real output is truly valid for that purpose.

    But there are problems in assuming, particularly over long periods, that abstract concepts of worker welfare or estimates of aggregations of the diverse outputs of a complicated economy in real terms are somehow uniquely defined. Consider the CPI. It has undergone various methodological changes over the period shown on the charts. Yet, because it is used for indexing in legally-enforceable contracts, BLS never revises it retroactively since that would upset its consistent history. Instead, one methodological version is spliced onto the previous version going forward.[3]

    For example, during the 1970s, the BLS measure of housing costs was determined by a methodology that gave heavy weight to (mortgage) interest rates. Before the 1970s, such rates did not fluctuate much but then, in part because of a pickup in inflation, the rates began to move. Eventually, a different methodology based on rental equivalents of owner-occupied housing was installed, but not retroactively.

    And there have been other such changes, especially with regard to substitution effects and quality adjustments.

    The deflator used for output by BLS is really part of the national income accounts. There, too, methodology has changed over time, but unlike the CPI, such changes are often incorporated retroactively back to arbitrary dates. And the methodological changes introduced, while they are aimed at actual theoretical problems, are typically chosen from a set of reasonable approaches. Put another way, there are alternatives which might have been chosen that would have produced different results.

    In short, it is hard to say when you look at the divergence between the official measure of real wages and the official measure of productivity, what the question(s) should be. Saying the divergence is largely due to workers’ typical consumption baskets somehow systematically differing from the output basket starting in the 1970s assumes that we have the “right” price indexes for both of the baskets. But maybe productivity isn’t growing as fast as the official measure suggests. If the “true” price index is more like the CPI and less like the official deflator, measured productivity would rise more slowly. It all depends on how much faith you have that we have the right price indexes.

    We can, however, take the abstract concept that real wages should, or do, or used to, rise with productivity and get rid of the uncertain price index element entirely. As we noted above, that concept is equivalent to saying that labor’s relative share of output – at least adjusted for the business cycle – is more or less constant. Labor’s share and output can be measured in nominal terms; no price index is required. We don’t have to worry about price index methodology. So let’s look at the share over time.

    As the chart above shows, the share seems to have started slipping in the 1960s. It flattened in the 1980s and staged a partial comeback in the 1990s. (Did high tech and finance sector pay hikes during the dot-com boom cause the partial reversal?) Then labor’s relative share declined, notably starting BEFORE the Great Recession took effect, and continued to decline thereafter.

    In short, if I had to choose a research project based on these observations, I wouldn’t focus on why worker consumption basket prices differed from output basket prices – because there are too many iffy methodological adjustments in our price indexes. I would instead focus on an issue that doesn’t depend on price indexes at all. What explains the movement, adjusted for the business cycle, of labor’s relative share? Why did it start to decline in the 1960s? What gave it a temporary partial boost in the 1990s? And what happened to the share after the end of the dot-com boom?

    [1] Joshua W. Mitchell alerted me to this source.

    [2] We start the chart in 1953 to avoid effects of World War II wage-price controls and Korean War controls. There was a double-dip recession after 1979 so we skip the middle “peak” of that episode on the chart. Otherwise, peaks are based on NBER business-cycle dating.

  • 03 Aug 2015 11:16 AM | Daniel J.B. Mitchell (Administrator)

    Let’s start by conceding the obvious (to anyone who knows even a little about labor market statistics). Educational attainment is positively correlated with good job outcomes. More educated workers are generally paid more and have lower unemployment rates, as the chart below from the U.S. Bureau of Labor Statistics (BLS) shows. Correlation isn’t causation, of course. Some observers would argue that much of what we see as correlation is due to what was once called “creeping credentialism.” In that view, more education is increasingly being required for jobs that don’t objectively need it. Various stories can be told that could, in theory at least, produce such a creep. But for purposes of this musing, and given the strength of the correlation and its persistence, let’s go further and concede some degree of causation between more education and good results in the labor market.

    Source: U.S. Bureau of Labor Statistics, (as of 7-28-2015)

    The link between education and good labor market outcomes seems to have been driving federal policy of late, particularly when it comes to higher education. There has been a push for various goals linked to obtaining a four-year college degree. There was much talk of having “debt-free” college graduates, of free tuition at community colleges (which typically can take students half way to a four-year bachelor’s degree). There was also a push at the federal level to rate colleges with some kind of uniform scoring, an attempt that now seems to have been semi-abandoned in favor of more generalized “accountability.”[1] Such issues seem to be likely topics for the upcoming 2016 presidential race. But the emphasis on higher ed seems misplaced.

    It’s easy to move from the observation that on average (an important qualification that can hide much variance around the average) someone will benefit in future employment from completing college to an implicit policy that everyone should go to (and should complete) college. But viewed only from a labor market perspective, college completion is just an instrument for improved employment outcomes, not a goal in itself. (And many academics, particularly those in programs that don’t lead to professional degrees, would object to viewing education as a purely job-related pursuit.)

    BLS projections don’t indicate that the occupations with the most employment expansion are those that require college degrees. Below is an excerpt from a BLS table showing employment projections through the year 2022. The occupations with the most absolute employment growth (a characteristic more relevant than percentage growth when it comes to job opportunities) clearly are not those which BLS rates as requiring a four-year bachelor’s degree.

    In short, having everyone go to college on the assumption that all would end up in better jobs – even if universal college completion were a realistic goal – would likely produce a labor force of overqualified – and possibly frustrated – workers. Someone would have to do the jobs listed on the table above. Note also that, by definition, there would be no college premium in terms of pay or any other measure if everyone completed college. Nor would the pay level currently seen as an average for college grads likely be what prevailed if everyone were a four-year graduate. Labor market policy would be better focused on improving conditions for those in jobs which don’t require a college degree and who don’t have such degrees.

    [1] See;; and;

  • 27 Jul 2015 10:12 AM | Daniel J.B. Mitchell (Administrator)
    It has long been noted that women receive lower pay on average than men. Once that basic observation is made, it leads to a research literature as to why. Is it due to discrimination? Or is it due to other factors such as occupational choice or education or experience? The approach at that point is to standardize statistically for whatever you might think leads to pay differentials in the labor market and then see if there remains an unexplained differential after controlling for those independent influences. If there is an unexplained component, that remaining gap is attributed to discrimination or at least to unknown influences. (A similar literature exists for race or ethnic pay differentials.)

    As interesting as that literature may be, there is another approach equally of interest. We can look at the trend in the ratio of earnings between females and males over time. The U.S. Bureau of Labor Statistics has tracked worker-reported “usual weekly earnings” of full-time employees on a quarterly basis since 1979. The chart below shows annual data from that series through 2014. There are several notable aspects of the female-to-male pay ratio. The first is that it generally has risen over time, although not without interruption. The periods of especially rapid rise seem to occur during recessionary periods such as the early 1980s, the early 1990s, and the early 2000s. These are periods when the male-oriented manufacturing and construction sectors were especially hard hit.

    The fact that the female/male pay ratio has tended to rise over the past three and a half decades suggests a bifurcated labor market. In relative terms, although not in absolute pay terms, women’s labor market prospects were improving relative to men’s. That relative gain doesn’t mean that the gain in some absolute sense was remarkable. When you look at the chart below, which presents pay in inflation-adjusted terms by sex, male pay is going nowhere (it actually trends slightly down, falling about a quarter of a percent per annum end-to-end) while female pay is showing modest gains (rising about half a percent per annum).

    The limited real pay gains of American workers are a much discussed phenomenon and another subject. But what about the bifurcation seen in the two charts so far? Was that a continuation of an earlier pre-1979 trend?

    Since the BLS data used for the two charts shown so far begin in 1979, they can’t answer that question. However, the U.S. Bureau of the Census has related data that go back further in time. So we can look at the trend in Census data before and after 1979. The Census data record annual earnings of full-time, full-year employees by sex. They go back to 1960 on a reliable basis.[1] The chart below shows the trend from 1960 through 2013. As can be seen, in general terms, the post-1979 period is similar to what we found using BLS data. That is, the trend is upward and there seems to be a boost in the trend during recessionary periods.


    It appears, however, from Census data that there was a breakpoint around 1979. Before that time, although the female/male pay ratio bounced around, it showed no upward trend. Then it began an ascent.

    If the ratio showed no trend in that earlier period, that fact had to mean that males and females were experiencing about the same real and nominal pay changes. From 1960 to 1979, real female pay rose about 1.8% per annum end-to-end and real male pay rose about 1.7%. Thus, in a relative sense – not an absolute sense – there was a unisex labor market before the 1980s and a bifurcation thereafter.
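The end-to-end “per annum” figures quoted above are compound annual growth rates between two endpoint observations. A minimal sketch of that calculation, using hypothetical index values rather than the actual Census figures:

```python
# End-to-end ("per annum") growth of the kind used in the text.
# The endpoint values below are hypothetical, not actual Census data.
def annual_growth(start, end, years):
    """Compound annual growth rate between two endpoint observations."""
    return (end / start) ** (1 / years) - 1

# e.g., a real pay index rising from 100 to 140 over the 19 years
# spanning 1960 to 1979 works out to roughly 1.8% per annum:
g = annual_growth(100.0, 140.0, 19)
print(f"{g:.2%}")  # 1.79%
```

Because the calculation uses only the two endpoints, it is insensitive to what the series does in between, which is why the text describes the figures as “end-to-end.”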

    Although the absolute pay gap between males and females and its causes remains an interesting research topic, getting a sense of what moved the labor market in a relative sense from unisex to bifurcation is equally important and, perhaps, less well researched. There are some obvious potential causes: the political shift to the right in the 1980s, the sharp decline in unionization, irreversible drops in employment in some key male-oriented sectors of manufacturing, e.g., steel, during the rise of the U.S. dollar in international exchange markets. Perhaps there was a burst of technological change that had a more adverse effect on males than females in the labor market. But whatever the cause(s), the shift in the labor market from the 1970s to the 1980s seems to be an understudied topic.

    [1] Census has data back to 1955 but cautions that the pre-1960 figures are unreliable.

  • 20 Jul 2015 9:18 AM | Daniel J.B. Mitchell (Administrator)

    There has been a succession of stories concerning universities attempting to deal with complaints about sexual harassment and assault. Growing pressure on universities to do something about the issue has led to creation of internal adjudication processes that sometimes take on Orwellian aspects and sometimes simply lack appropriate and basic due process. Legal scholars have been pointing out the problem for some time. But two recent cases have highlighted the issue.

    The first involved a professor – Laura Kipnis at Northwestern University – who was accused of writing an op-ed about Title IX[1] – a federal requirement related to discrimination on the basis of sex – that made some students uncomfortable (they said). While anyone can file anything, at some point in the investigation, the university authorities began to take the charge seriously and seemed to forget about academic freedom. Prof. Kipnis was writing about a public policy matter. When Prof. Kipnis exposed the proceedings, there was an Internet storm and the charges were dropped.[2]

    The second case involved a court decision that went against the University of California, San Diego. In that matter, one student accused another of non-consensual sex. Note that an accused student is likely to have fewer resources than a tenured professor for challenging university procedures (as in the Kipnis matter). Nonetheless, after the accused student was suspended and took his case to court, the judge in the case found that basic due process had been lacking. There was also evidence that a university official had added to the penalty imposed on the accused student in retaliation for his eventual recourse to the judicial system.[3] Whether the university will appeal the verdict is not known at this writing. However, the court decision led to an editorial in the Los Angeles Times questioning the ability of universities to provide fair proceedings. The editorial concluded:

    If schools are going to remain in the business of handling allegations of sexual assault, they must be sure victims are treated with respect, that complaints are taken seriously and pursued vigorously, and that the basic rights of the accused are not abridged.[4]

    What seems clear is that universities are not well equipped to handle such cases. If they hire officials whose job it is to prosecute as well as investigate, there is a built-in conflict of interest, as the San Diego case makes clear. As it happens, the University of California’s Board of Regents is at present trying to come up with rules and procedures to deal with sexual harassment and assault adjudication. There have been vague assurances from the university’s central administration that the eventual machinery to be proposed will be fair to the accused. However, as long as the adjudication process is entirely in-house and run by university officials (sometimes with student panels), the issue of lack of due process will remain.

    But there is a potential solution: outsourcing the final step in the process to professional arbitrators. Note that this solution is one which universities that have unionized employees regularly use in the labor relations context. If it can work there, why should it not be utilized in the area of sexual harassment and assault?

    I am going to put aside the issue of whether university adjudication processes should be used for complaints of conduct that, if it occurred as charged, is criminal. There is an argument to be made that at least some complaints should be referred to local police or – if the university has its own police department – to that agency. Nevertheless, let’s assume, for purposes of this musing, that universities – perhaps because of Title IX or for other reasons – will feel that they need to have an internal mechanism for all complaints.

    Because the union sector has declined drastically over the past few decades, its procedures for grievance adjudication may not be well known by top university officials. Even in universities that have collective bargaining for some employees, labor relations may be compartmentalized so that those decision-makers not directly involved in union-management issues may not be fully aware of the workings of systems of grievance and arbitration. So let’s review what a typical system entails.

    If an employee has a grievance, there is generally some informal review which may resolve the matter. Absent an informal resolution, there follows a more formal step procedure in which the grievance is taken up by the union on behalf of the employee with management. If a settlement is not reached after the step process is completed, an outside professional arbitrator is selected. The arbitrator then hears the case in a procedure that is less formal than might occur in an outside court, but does involve witnesses, evidence, cross examination, briefs, etc. Both sides are able to present evidence and rebuttal. In the case of an employee who has been subject to discipline, the grievance is framed in the context of “just cause.” Was the discipline imposed for just cause? That question starts with whether the alleged infraction occurred and then whether – given all the circumstances – the discipline imposed was appropriate.

    Over many years, the concept of just cause has been developed in arbitration as a kind of common law. Nonetheless, it suggests due process. Relevant would be the thoroughness of the investigation by management, the consideration of available evidence, consistency with past discipline in similar cases, etc. Arbitrators will consider what the union-management contract has to say in terms of procedures that must be followed and about the meaning of just cause. The decision of the arbitrator, which could be a voiding or lessening of the discipline imposed or upholding the discipline, is then binding on the parties. Note that managers who have the authority to impose discipline know that it is always possible that their judgments might be tested in the grievance process and could ultimately be reviewed by an outside neutral arbitrator.

    Of course, it is possible to try to reverse labor-management arbitration decisions in the external courts. But courts tend to “defer” to arbitration decisions. There is a practical component to such deferrals. Court caseloads are crowded. If there is an alternative process that incorporates due process, second guessing professional arbitrators is not something that courts would want to do on a regular basis.

    There is a difference, of course, between a labor relations grievance and a complaint of sexual harassment or assault. There is no direct analogy to a labor-management contract in the latter situation. Neither the person making a complaint nor the accused has the equivalent of a union to be a representative in the process. But the key point is the potential – known to all involved – that if the matter is not settled informally or through the step process, there will be an outside neutral reviewer and arbitrator who will make a final decision.

    The fact that there are differences between an employment-related grievance in the union-management context and a complaint of sexual harassment or assault is not a barrier to using an outside arbitrator. In place of the contract and general rules of the workplace are university policies regarding sexual harassment and assault. The process can include permitting both the person making the complaint and the accused to have a representative present as an advisor at every point in the procedure. (Indeed, the university could offer to provide such a representative.) In short, there is nothing that prevents an outside neutral professional from being used as the final decision-maker. Indeed, not using a neutral outsider invites external review of the type universities don’t like – either an Internet fury as in the Kipnis affair or an adverse court decision as in the San Diego case.


  • 13 Jul 2015 8:17 AM | Daniel J.B. Mitchell (Administrator)

    Has the labor market changed since its last business cycle peak in 2007? Between then and now, we had the Great Recession which presumably could have made structural alterations to the way the labor market functions. The most widely used measure of labor market conditions is the official unemployment rate. Unemployment by that index is falling towards levels similar to the last peak as can be seen on the chart below. So, although we are not necessarily at the next peak, are we coming back to “normal”?

    There are other measures of the labor market which suggest that things now are “different” relative to what they were at the 2007 peak. Among them is the job openings rate (or vacancy rate), which is currently above its previous-peak level even though unemployment is still higher than at the prior peak. The chart below illustrates that shift between the prior peak and now.


    Moreover, the job openings rate shift seems to have occurred at around the time the Great Recession bottomed out in 2009. That is, it is not just recently that job openings could be viewed as higher than you might have expected given the condition of the labor market. The U.S. Bureau of Labor Statistics (BLS) provides a chart of the so-called “Beveridge curve,” the (inverse) relation between the job openings (vacancy) rate and the unemployment rate. A standard interpretation is that if the Beveridge curve shifts up and to the right on the chart, there is some kind of new inefficiency that has been introduced into the labor market. You can view the curve shift as indicating that it takes more vacancies than “normal” (with “normal” meaning what the relation was before the 2007 peak) to bring the unemployment rate down to any given level.
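    One way to see why an outward shift means “more vacancies for any given unemployment rate” is a stylized flow steady state: separations into unemployment must equal matches out of it. The Cobb-Douglas matching function and every parameter value below are textbook-style illustrations of mine, not anything BLS estimates:

    ```python
    # Stylized Beveridge curve from a Cobb-Douglas matching function
    # m = mu * u**alpha * v**(1-alpha). In flow steady state, separations
    # s*(1-u) equal matches m, which pins down the vacancy rate v consistent
    # with each unemployment rate u. All parameter values are illustrative.

    def vacancies_for_unemployment(u, s=0.03, mu=0.5, alpha=0.5):
        """Vacancy rate on the Beveridge curve at unemployment rate u."""
        # Solve s*(1-u) = mu * u**alpha * v**(1-alpha) for v.
        return (s * (1 - u) / (mu * u ** alpha)) ** (1 / (1 - alpha))

    # Lower matching efficiency (mu) shifts the curve outward: more vacancies
    # are needed to sustain the same unemployment rate.
    u = 0.055
    print(vacancies_for_unemployment(u, mu=0.4) > vacancies_for_unemployment(u, mu=0.5))
    ```

    In this toy model, a drop in matching efficiency (whatever its cause, on either side of the market) is exactly what an up-and-to-the-right shift of the plotted curve would record.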

    But what is the nature of that inefficiency? Whose fault is it? BLS provides a standard interpretation of the curve shift along with its chart [1]:

    The position of the curve is determined by the efficiency of the labor market. For example, a greater mismatch between available jobs and the unemployed in terms of skills or location would cause the curve to shift outward (up and toward the right).

    Although the BLS doesn’t specify the nature of the inefficiency, a standard story is that worker skills don’t match employer needs; worker skills have somehow eroded or become outmoded.

    There is a puzzle to the worker skill mismatch story. While it’s possible for worker skills to become outmoded over time if they are not employed, as with capital depreciation, the effect should take a while to set in. The fancy word for this explanation of depreciating skills is “hysteresis” in the labor market. Whatever you call it, the skill erosion story seems to put the onus for the problem on the supply side (worker side) of the labor market. Workers, the story implies, should update their skills to meet employer needs. When they do, they will get jobs more easily. If you are more liberal in your political orientation, you might alternatively say that we need public programs to subsidize retraining. Workers need retraining, in that view, but government should help them obtain it.

    However, labor markets have two sides. What about the demand side (employer side)? When a sharp recession occurs, there is a period thereafter during which recruitment is easy for employers. Applicants are plentiful. Employers need not do much more than let it be known that jobs are available to have a queue of applicants. That phenomenon – long queues – is a sudden effect that emerges with a sharp recession.

    Unlike unemployed worker skill erosion, which occurs over time, no gradual change is involved on the employer side. So it could be that the skill that has eroded is not a worker skill but an employer skill. The lost skill – if that is the right word – is employer aggressiveness in recruitment. Employers, in this alternative story, have forgotten that it is sometimes necessary to reshape jobs to worker needs and skills, and to outbid other employers in terms of pay and conditions.

    If that demand-side explanation doesn’t suit you, here is another story, also on the demand side. Hiring can be loosely considered indefinite or temporary. Temp hiring can be done through an employment agency or directly by an employer. In either form, it puts the new hires on notice that their jobs are of short duration or are explicitly temporary.

    Our labor market data only measure hiring through temp agencies. The data don’t distinguish between temporary and indefinite hires. So if workers are hired directly by employers but with a temporary understanding, we have no measure that distinguishes such hires from “regular” employment. We do know, as the chart below shows, that the proportion of hires through temp agencies is now higher than it was at the previous peak. Temp agency hiring can be taken as a proxy for more temporary hiring in both forms (direct and through agencies).

    If employers have shifted their hiring toward short-duration labor market contracting, perhaps after having experienced the trauma of having to do mass layoffs of regular employees during the Great Recession, one would expect more vacancies now. Short duration hiring means frequently having vacancies as the temp hires are let go and replaced. Put another way, there will be more churning in the labor market which is likely to be associated with more vacancies at any point in time.
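    The churning logic is a simple stock-flow identity: the share of jobs vacant at any moment is roughly the monthly refill rate times the average time a vacancy stays open. The numbers below are made-up illustrations, not data:

    ```python
    # More short-duration hiring means positions are refilled more often, so
    # more vacancies are open at any point in time, even with unchanged
    # time-to-fill. Stock-flow identity (approximate):
    #   vacancy rate ~= monthly turnover * average months to fill.
    # All numbers here are illustrative assumptions.

    def steady_state_vacancy_rate(monthly_turnover, avg_months_to_fill):
        """Approximate share of jobs vacant at a point in time."""
        return monthly_turnover * avg_months_to_fill

    low_churn = steady_state_vacancy_rate(0.02, 1.0)   # 2% of jobs refilled monthly
    high_churn = steady_state_vacancy_rate(0.04, 1.0)  # double the churn
    print(low_churn, high_churn)
    ```

    Doubling turnover doubles the measured vacancy rate with no change at all in worker skills, which is the point of this demand-side story.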

    The point of this musing is not to produce a definitive story of why the Beveridge curve, as charted by BLS, has shifted up and to the right. Rather its point is that assuming the explanation is entirely on the supply (worker) side of the labor market is unwarranted. The supply-oriented explanation of outmoded job applicants has implications. It suggests that there is a skill mismatch problem and that the onus is on the worker (perhaps with government assistance) to fix it. One way or another, workers should get themselves retrained. For example, the recent interest in policies to promote tuition-free community college seems linked to such a diagnosis.

    But if what we are observing is a change in employer behavior, an exclusive focus on community college tuition or similar measures is aimed at the wrong target. We know from past experience with high-demand labor markets that employers eventually come up with ways to adapt to the worker supply that is available. In past periods of high demand, employers have boosted their own training efforts. They have bused in workers from more distant areas. They have redesigned jobs. We might start, therefore, by promulgating reminders of such past efforts and by highlighting examples of whatever current efforts in that direction are now occurring.


  • 06 Jul 2015 9:49 AM | Daniel J.B. Mitchell (Administrator)

    From time to time in these musings, I have referred to a paper I wrote back in 1998 – before the Eurozone was fully in place – entitled “Eur-Only as Sovereign as Your Money: California's Lessons for the European Union.”[1] The paper appeared in the June 1998 edition of the UCLA Anderson Business Forecast publication as an excerpt from a longer presentation I made subsequently at a meeting of an international group that took place in Bologna. The theme of the paper – as the title suggests – was that countries joining the Euro-zone were giving up an important element of their macroeconomic policy.

    Specifically, countries joining the Euro-zone were surrendering their conventional monetary policy (control of interest rates) and the ability to change their exchange rate, i.e., to vary their competitive costs of production, relative to those of their major trading partners. In recompense for that loss of control, those countries that joined would get lower cross-border transactions costs and an end to exchange rate risk with other countries within the Euro-zone. So was the upcoming sacrifice worth the benefit? That was the key question and it seemed to me at the time that there was insufficient recognition of the trade-off prospective Eurozone members were facing.

    The paper noted that the State of California was effectively a member of the “dollar-zone” within the U.S. Thus, while benefiting from lower transactions costs and an absence of exchange rate risk with the rest of the U.S., California had no independent state monetary policy. Put another way, California’s monetary policy was effectively in the hands of an external Federal Reserve, the U.S. central bank. And when California experienced a negative shock – the end of the Cold War around 1990 and the resultant decline of its then-large aerospace/military industry – it could not change its exchange rate to facilitate the adjustment relative to the other areas of the U.S.

    There were two results of this lack of economic sovereignty in California. A mild recession in the U.S. in the early 1990s could not be escaped by California since it was part of the overall dollar-zone. And the structural negative shock (end of the Cold War) played out as an ongoing state budget crisis, a decline in employment, an out-migration of those workers displaced by the shock, etc. Californians who were around at that time will recall those developments and adjustments as painful.

    Californians might also recall something else that happened in the aftermath of the downturn of the early 1990s. Just as the state government was adversely affected by the economic shock (reduced tax revenue), so, too, were local governments within California. One of those local governments, Orange County – located just south of Los Angeles – went into bankruptcy in late 1994.[2]

    There is an old joke that you can find out who is swimming naked only when the water recedes. It turned out that Orange County had been engaged in financial speculations which, for a time, provided high returns on investment that helped sustain local services by supplementing tax revenues. But with high returns inevitably comes risk. And one day the economic tides receded and there were big losses rather than high returns for Orange County. The County’s financial misbehavior was exposed and it could not pay all its bills. Some creditors would not be paid on schedule.

    Like the other counties in California, Orange County is run by an elected Board of Supervisors.[3] In 1995, the Board put a proposition on the County ballot asking voters whether they wanted to raise the local sales tax to maintain services and avert bankruptcy-related cuts. The tax was rejected by the electorate. And after some missed debt payments, arrangements were worked out with creditors and eventually the County recovered. There is more to the story, but now you have the general outline.[4]

    The article I referred to at the outset of this musing drew lessons for the impending Euro-zone from what happened at the state level in California in the 1990s. But there are also lessons for the current Greek crisis from what didn’t happen at the local level in California. Let’s start with what did happen. The private sector in Orange County continued its recovery from the larger California recession that developed earlier in the decade, as can be seen on the chart below.

    Having an unstable local government that was in financial difficulties was certainly not a plus for the County’s business climate. But those difficulties didn’t create a local recession, either. In particular, there were no financial panics. There were no runs on banks in Orange County. County residents had full access to their bank accounts. They didn’t empty grocery store shelves and hoard food. They didn’t hoard currency. ATMs operated normally. Residents’ credit cards continued to be accepted. Stock markets in the U.S. and around the world did not tremble because Orange County’s government couldn’t pay all its bills.

    Now imagine a very different scenario. Suppose the Federal Reserve had declared in 1994 that if Orange County’s government couldn’t or wouldn’t pay all its debts, the Fed would stop providing the ongoing backup to banks in Orange County that central banks typically provide. Suppose that the Fed had announced that if Orange County’s government could not pay its debts, the County could no longer even be in the dollar-zone and therefore would have to introduce its own currency or somehow cope on its own. Had any such announcement been made, the fallout from the Orange County bankruptcy would clearly have been more drastic than what actually occurred. Surely, there would have been runs on Orange County banks. And since those banks were connected to financial institutions outside Orange County, the panic could easily have spread nationally and even internationally.

    But none of these things happened. In fact, it is unthinkable that the Fed or other central government institutions in the U.S. would take such a position. They simply wouldn’t say that because a local government within the dollar-zone was not meeting its obligations to creditors, all central obligations to maintain financial and general economic stability within that jurisdiction’s private sector would cease. They wouldn’t say that the local jurisdiction would have to create its own currency thereafter. Instead, they would view the residents, banks, and businesses of areas within the dollar-zone as remaining in it, regardless of what their local government authorities might do.

    Indeed, anyone reading this musing would say that my hypothetical story above is ridiculous – because it is ridiculous. I have no idea how the Greek electorate will vote on the planned referendum on the Eurozone’s terms.[5] I have no idea what the Greek government may do. And it really doesn’t matter for the purpose of this musing. If the story above seems ridiculous as a policy for U.S. central authorities to have followed in Orange County’s government debacle, why isn’t it a ridiculous policy for Euro-zone central authorities to be following in the Greek debacle?



    [2] Orange County at the time had a population of about two and a half million people. Its population today is over three million and it has a gross product of over $200 billion, roughly comparable in dollar terms with Greece’s.
    [3] Counties in California’s complicated hierarchy of governments provide health and welfare services, run local jails, and provide services such as police and fire in unincorporated areas (areas that are not part of cities) and in cities that contract with the county for such services.
    [4] See

    [5] Although this musing is dated the day after the planned referendum, it was written before. Mitchell’s Musings are typically dated the Monday of the week in which they appear.

  • 29 Jun 2015 9:19 AM | Daniel J.B. Mitchell (Administrator)

    In last week’s musing, I discussed the issue of academia’s struggles with such matters as “micro-aggressions” and “triggers.”[1] Basically, the message was that university policies around such matters have the tendency either to provoke external ridicule – political correctness gone wild – or abuse by university administrators. Those are the downsides.

    What I did not provide in that musing was what university administrators should do – as opposed to what they shouldn’t do. Of course, sometimes it is enough just to point out what shouldn’t be done. However, university administrators do receive complaints about alleged micro-aggressions and apparently do feel compelled to do something. Furthermore, what got me thinking further about this matter was a public radio broadcast here in the Los Angeles area which was triggered (pun absolutely intended) by an op ed in the Los Angeles Times by libertarian-leaning UCLA law Prof. Eugene Volokh. His op ed was drawn largely from a post on his blog, the Volokh Conspiracy, which is carried by the Washington Post website.[2] As you might expect, Volokh’s op ed and the earlier blog post opposed the micro-aggression training program at the University of California that was mentioned in last week’s musing. Subsequently, the Los Angeles Times published an editorial essentially endorsing the Volokh position.

    The broadcast featured as guests Prof. Volokh and Prof. Derald W. Sue of Teachers College at Columbia University. Volokh essentially repeated the stance taken in his op ed; Prof. Sue took the position that micro-aggressions were real. But Sue went off track in two ways. First, even accepting Sue’s point, it remains unclear that universities should take steps that suggest to students and faculty that individuals who say things that seem innocent on their face (“Where were you born?”), or even controversial or provocative things about public policy (“Affirmative action is racist”), should be threatened with penalties.

    Second, when asked about such statements that are said to be micro-aggressions such as “America is the land of opportunity,” Prof. Sue said that particular statement wasn’t true because not everyone who comes to America has equal opportunity. But there is nothing in the statement “America is the land of opportunity” that requires the interpretation that “America is the land of equal opportunity.” The statement presumably can simply mean that there is more opportunity in America than in some other places (which is presumably a major reason why folks immigrate to the U.S.).

    In effect, the conversation veered off from the topic of whether a specific statement might be taken as a “micro-aggression” and whether something should be done to repress it, if so, to whether the statement was true under Prof. Sue’s interpretation. There are statements that are true – “you’ve gained a lot of weight” – but that might not be the best thing to say to someone. We all know that there are statements in everyday conversation that aren’t necessarily true, but sometimes can be the best thing to say. “You look great” said to someone returning after a long illness may not be strictly true, but it can be encouraging to the recipient.

    The micro-aggression idea is really a subcategory of a large body of work in psychology and other fields – nowadays even in economics – that involves framing and subtle “nudges” that can influence behavior. My favorite example is a presentation I heard a few years ago by a colleague at UCLA describing an experiment. As I recall it, Asian female student volunteers were randomly divided into three groups and given a math test. Before the test was administered, some students were asked a question about what language their parents spoke at home – a reminder of race/ethnic background. Another group was asked if their dorm was co-ed, a reminder of being female. The third control group was asked a neutral question: something about phone access in their dorm.

    The purpose of the experiment was to see what impact the stereotypes of Asians being good at math, but women being poor at math, would have on the results of the math test given to students who fell into both categories (Asian and female). Those students reminded of being Asians scored best. The neutral group came out in the middle. And the group reminded of being female scored the lowest. Apparently, a seemingly-irrelevant question to math produced a behavioral response in math performance.

    The math experiment is only one of many such cases. In fact, there is a substantial literature documenting these types of results. There are behavioral labs in universities devoted to research in this area. But note that the behavioral response to what was done in the math experiment could have positive, as well as negative, outcomes. Doing better than the control group on a math test was surely a positive outcome. Put another way, the students reminded indirectly of being Asian were “triggered” to perform better. Perhaps the students in the group who were indirectly reminded of being female by a seemingly-neutral question could be viewed as being the victims of a “micro-aggression.”

    Clearly, advertising and marketing are longstanding fields entirely devoted to the notion that what is said – even if irrelevant to a rational choice by consumers – can influence behavioral responses. And there has been no secret about that fact for decades. In the 1950s, for example, there was a popular book – The Hidden Persuaders – devoted to informing the public that it was being manipulated in subtle ways that were seen as positive by advertisers but perhaps negatively by the consumers of those advertisements.

    Some forms of insurance are denoted by the bad thing they insure against: fire insurance, collision insurance, flood insurance, earthquake insurance. However, there is a reason why what is called “life insurance” is not called “death insurance” even though such insurance is taken out against the risk of dying. In principle, whether you should buy life/death insurance should have nothing to do with its label. But obviously those who sell insurance think that the label matters and that death is not a popular subject. Similarly, there is a reason why opponents of the estate tax call it the death tax instead. Presumably, the label put on a tax should not affect appropriate choices of fiscal policy. But someone evidently thinks that having a tax with “death” in its name will influence legislative outcomes.

    So if you are a university administrator, what can you do to improve “campus climate” without getting yourself into situations where you are held up to ridicule or find yourself defending the indefensible? The fact that condemnation of campus anti-micro-aggression/trigger policy excesses is now moving from libertarian blogs to mainstream news sources (such as the LA Times editorial page) should be a warning to university administrators. Mandatory training sessions that resemble authoritarian re-education camps are definitely NOT the way to go. Orwellian investigations without reasonable due process are not the way to go. So what is the way to go?

    Academics like research; that’s what they do. Behavioral studies of the kind noted above – the math experiment, for example – if presented as just that, interesting research, would attract the attention of many academics. So why not start by making such research widely available? Such an effort might well attract an audience – even among the more socially-challenged members of the community. It might even influence their behavior. But the emphasis should not be only on the negative – which is the focus of the “micro-aggression” approach – since the research in fact points to a range of responses, positive and negative, to subtle cues and framing. There are micro-encouragements – the research suggests – as well as micro-aggressions.

    Beyond the circulation of research information, administrative interventions should focus on situations of explicit exclusion. Recently, for example, a high-profile scientist mused in public that women in scientific laboratories were trouble because they caused romantic relationships to develop that diverted lab attention from research.[3] That statement was more than a micro-aggression. A university would be remiss in having a person who made public exclusionary remarks of that type as a director of a research center, a department chair, a dean of a school, or heading some other program. Remarks of someone in charge of a university program that he/she doesn’t care to deal with members of group X (race, sex, religion, national origin, etc.) should disqualify that person from a leadership position.

    You don’t need elaborate experiments to see that targeted individuals might well feel unwelcome in a research program, department, or school led by someone making such exclusionary remarks. Universities are bound by law and acceptance of federal funding to ban discrimination on the basis of race, sex, religion, national origin, etc. So administrative action might well be warranted when someone in charge of a university program makes exclusionary remarks of that type since they violate university policy and legal obligation.

    Exclusionary remarks might also work to discourage students from enrolling in someone’s classes. If you say you don’t want to deal with group X or that group X is trouble, surely someone in group X would have to worry about being in your class. But positions on sensitive public policy issues such as affirmative action cannot be taken as exclusionary by themselves unless there is firm evidence that alternative viewpoints are not presented or tolerated in the classroom.

    There has to be a rule of reason applied and, if you are a university administrator, you are going to have to determine what is reasonable, particularly in the light of traditional values of academic freedom. (Sorry, but if you are an administrator, that is what you are paid for – reasonable judgments.) If you need a more definite rule than that, consider this one: As a university administrator, when you move away from explicit statements of exclusion and get into the realm of micro-aggressions, you would be well advised to avoid actions that make yourself the subject of editorial derision.

    [2] The broadcast (“Airtalk” on KPCC) can be heard at

    The original Volokh blog posting is at and his op ed is at

    The LA Times editorial is at


  • 15 Jun 2015 2:51 PM | Emily Smith (Administrator)

    Much was made of a “slowdown” of the economy in the first quarter of 2015. According to the latest release from the U.S. Bureau of Economic Analysis – which is the second estimate for that period – real GDP in the first quarter declined at an annual rate of 0.7%, in contrast to an initial report of an annualized increase of 0.2%.[1] The estimate of a decline brought headlines which didn’t always reflect the fact that these are annualized numbers, i.e., the actual quarter-over-quarter decline (if there was one) was only about one fourth of 0.7%.[2] CEOs are said to have revised down their growth estimates, perhaps after hearing from their corporate economists about the decline.[3] (Question: Are CEOs fully conversant with BEA methodology for producing real estimates from nominal data and then seasonally adjusting the results? It is likely that most economists who use the BEA data would have to look up the technical details.)
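    The annualized-versus-quarterly distinction is simple compounding arithmetic; a minimal sketch (the function name is mine, not BEA’s):

    ```python
    # De-annualize a compounded annual growth rate into the implied
    # one-quarter change (the convention BEA uses for headline GDP numbers).

    def quarterly_from_annualized(annual_rate_pct: float) -> float:
        """Implied quarter-over-quarter change (%) from a compound annual rate (%)."""
        return ((1 + annual_rate_pct / 100) ** 0.25 - 1) * 100

    # The headline -0.7% annualized "decline" works out to under -0.2%
    # for the quarter itself.
    print(round(quarterly_from_annualized(-0.7), 3))  # about -0.175
    ```

    Put that way, the headline number amounts to a dip of less than two tenths of a percent over three months, well within the noise of preliminary estimates.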

    Real GDP data are typically presented in seasonally-adjusted form so the fact that the winter is a slow period generally shouldn’t affect the numbers. However, the winter was especially severe and therefore the seasonal factors – which are based on historical patterns – possibly understated the effects of bad weather.[4] Then again, as we have noted in prior musings, the methodology for calculating real GDP from the nominal figures is arcane. So we are talking about questionable seasonal factors imposed on arcane and preliminary estimates.

    Given the uncertainties, let’s look at the labor market and see if anything really exciting occurred in early 2015. Below is a chart of twelve-month percentage nonfarm payroll employment growth. It is based on actual employment levels (not seasonally-adjusted levels) compared with the same month a year earlier.[5] Note that this method nets out only normal seasonality; if winter was extra-severe in 2015 (which it undoubtedly was), the extra weather hit remains in the data, so measured growth will, if anything, understate the underlying trend.
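    The twelve-month comparison can be sketched as follows; the employment levels are hypothetical placeholders, not BLS figures:

    ```python
    # Twelve-month percent change in not-seasonally-adjusted payroll employment:
    # comparing a month with the same month one year earlier nets out normal
    # seasonality without relying on estimated seasonal factors.
    # Levels below (in thousands) are hypothetical, for illustration only.

    employment = {
        "2014-03": 137_000,
        "2015-03": 139_800,
    }

    def twelve_month_growth(levels, month, same_month_prior_year):
        """Percent change versus the same calendar month one year earlier."""
        return (levels[month] / levels[same_month_prior_year] - 1) * 100

    print(round(twelve_month_growth(employment, "2015-03", "2014-03"), 2))  # 2.04
    ```

    Because both months sit in the same season, any slowdown the measure shows cannot be an artifact of routine seasonal adjustment.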

    When you look at the chart, you see a slight slippage in the employment growth rate in early 2015. But even with the slippage – which may be entirely a weather-related event – the growth in employment was above the level we have seen earlier in the recovery. If we look only at the private sector, growth rates on a 12-month basis are a bit higher than when government is included. Nonetheless, except for a brief period back in early 2012, growth rates were generally below those so far in 2015.

    If we look at the employment-to-population ratio from the Current Population Survey (Household Survey), the results are similar. The improving trend continued into early 2015 and in fact was stronger in that period than in much of the earlier recovery. In short, when you look at the labor market and put the numbers into perspective, it appears that there really wasn’t a meaningful slowdown. At most there was a snow-down, CEO pessimism notwithstanding.

    What matters in terms of employment and the general direction of the economy is the underlying trend, not short-term noise. Blips up and down are often just statistical aberrations and should not be heralded as a structural shift. The most recent UCLA Anderson Forecast for the U.S. economy, for example, tells a story of a long and painful deviation from capacity due to the Great Recession and a slow recovery thereafter. As the chart below from that Forecast indicates, at current rates of (projected) growth, it will take another couple of years to get back to where we should be. Of course, it is possible that some unforeseen events could intervene before we get there. But nothing fundamental happened in the first quarter of 2015 that should change that perception.


    [2] The headline in the New York Times was “U.S. Economy Contracted 0.7% in First Quarter.” See The article’s text noted that the estimates were annualized.
    [4] It has been argued that there are other problems with the winter seasonal adjustment. See
    [5] All employment data cited are from the U.S. Bureau of Labor Statistics.

  • 08 Jun 2015 8:26 AM | Daniel J.B. Mitchell (Administrator)

    An item in the news dealing with the Federal Reserve and “transparency” caught my eye. Specifically, a column appeared in the Los Angeles Times – apparently reflecting a similar column in the Wall Street Journal – indicating that you shouldn’t rely on Federal Reserve chairs for stock market advice.[1] The column noted that after former Fed chair Alan Greenspan made his famous “irrational exuberance” warning, you could have lost a lot of money if you had pulled your investments from the stock market at that time, even taking account of subsequent events. One such event was the dot-com bust and another was the later housing bust. Nonetheless, timing is everything; if you sold at the bottom you might have lost, but if you held on to your shares you would have come out ahead.

    The column goes on to note that current Fed chair Janet Yellen has noted high valuations of some techy sectors of the stock market but those shares also went up after she sounded a cautionary note. Pay no attention to the Fed chair seems to be the lesson to be drawn. But is that so?

    There is the old bit of (useless) advice that you should “buy low” and “sell high” to make money. Indeed, a quick Google search failed to reveal even an attribution of that quote – because it is so commonly used to illustrate how it really tells you nothing. The advice is useless since it fails to tell you about timing – when exactly is the market low and when is it high? By itself, it is as unhelpful as advising you that when betting on a horse race, you should always bet on the winning horse.

    Useless though they are, both buy low/sell high and bet-on-the-winning-horse are bits of micro advice. They may be useless, but they are aimed at individual investors or bettors. Fed chairs don’t purport to be micro advisors competing with stock brokers or race track tip sheets. They do have macro concerns, however. Sometimes bubbles and the bursting of those bubbles can be costly to the overall economy. The dot-com boom/bust provoked a mild recession. On a regional basis in my home state of California, its impact was greater than in the U.S. as a whole, leading to – among other effects – a state budget crisis and the recall of a governor.

    The housing bubble/burst, on the other hand, produced the Great Recession. By many measures, particularly in the labor market, we are still not fully recovered from that recession. So surely, Fed chairs have legitimate concerns about such episodes. Their concerns are not whether you as an individual could make money by a long-term buy-and-hold stock strategy but what happens to the general economy when there are panics and market collapses.

    It may well be that making statements about irrational exuberance is not especially effective at avoiding such exuberance and that it has no effect on financial busts. It may be – in fact it likely is – the case that statements by Fed chairs are no more useful at telling you at the micro level how to time investments than telling you to buy low/sell high. But it seems wrong to imply that Fed chairs shouldn’t openly share their macro policy concerns in public forums. Surely, those folks who call for more “transparency” at the Fed would be unhappy with such a position. What could be more transparent than a Fed chair sharing her policy concerns?

    If Janet Yellen is expressing nervousness about stock market valuations, she could be telling you that she is leaning toward raising interest rates to cool things down. It’s likely that she is not alone in such leanings among Fed policy makers. She might be wrong in her evaluation of the market. But now you know how she – a significant player in monetary policy – feels. Whether you want to sell your stocks, or otherwise adjust your investments, on hearing what she has to say is your business. But whether the Fed raises interest rates is everyone’s business because of the macro effects – even the business of those who have no stock market investments and are just doing such real-world things as seeking employment.



Employment Policy Research Network (A member-driven project of the Labor and Employment Relations Association)