It's impossible to explain Fordism in a neoclassical model. Instead,
we have to turn to Ronald Coase, Karl Marx, and Herbert Simon. The
Marx-Coase-Simon model is that employment is not an agreement to do X
work for Y pay, but instead the submission of the employee to the
authority of the employer for some period of time.
Contracts requiring employees to work their hardest are impossible --
how do you prove in court that an employee was slacking? And contracts
giving the worker all of the money they make are impractical. So we
can model this using the contingent renewal we discussed in the last
chapter.
Employers pick a wage and a monitoring level that maximizes output. At
the end of every period, they fire the monitored workers who don't
appear to be performing well. This encourages employees to increase
their level of effort to the point where they don't get fired (at
least, some of them will; how many depends on how much employees
dislike the work, and so on).
How do employers pick a wage and monitoring level? How do employees
decide how much to work? Well, they can do so through trial and error.
Also, the choice of technology needs to be considered, since different
technologies affect how easy it is to monitor workers (e.g. _I Love
Lucy_-style assembly lines that can be mechanically sped up). But this
means technology will not be chosen purely on productivity grounds;
employers will also factor in its effect on labor discipline.
[Historian David F. Noble writes about examples of this. --AS]
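Here's a minimal numerical sketch of this labor-discipline logic (the functional forms and numbers are my own toy assumptions, not Bowles'): the worker's effort rises with the wage premium over their fallback position and with how closely they're watched, and the employer searches by trial and error over wages and monitoring levels for the combination that maximizes output net of wage and monitoring costs.

```python
import numpy as np

# Illustrative labor-discipline model (toy functional forms, not Bowles' exact ones).
# Worker: chooses effort given the wage w, monitoring intensity m, and a fallback
# position z (what they'd get if fired). More to lose + higher chance of being
# caught slacking => more effort.
def effort(w, m, z=0.5, disutility=2.0):
    premium = max(w - z, 0.0)                         # cost of losing the job
    return 1 - np.exp(-disutility * m * premium)      # effort in [0, 1)

# Employer: output is proportional to effort; wages and monitors are both costs.
def profit(w, m, productivity=3.0, monitor_cost=1.0):
    return productivity * effort(w, m) - w - monitor_cost * m

# Grid-search the employer's problem ("trial and error").
wages = np.linspace(0.5, 3.0, 251)
monitoring = np.linspace(0.0, 2.0, 201)
best = max(((profit(w, m), w, m) for w in wages for m in monitoring))
print("profit %.2f at wage %.2f, monitoring %.2f, effort %.2f"
      % (best[0], best[1], best[2], effort(best[1], best[2])))
```

With these made-up numbers the wage the employer settles on sits well above the fallback of 0.5 -- that gap is the enforcement rent that makes the threat of firing bite -- and cheaper monitoring shifts the mix away from wages, which is the technology point above.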
Even if we just focus on monitoring, we notice the normal economic
laws don't apply. 1) Labor markets won't clear (there are people who
want to work for me but I don't hire them because monitoring them is
costly). 2) The equilibrium is Pareto inefficient (we'd both be better
off if my employees worked harder and I monitored them less, but
that's not enforceable). 3) Employers spend money on people who don't
make anything (the monitors).
But most interesting is that the result is socially inefficient. As an
employer, I can choose between paying my workers more and monitoring
them more closely. To me, there's no difference -- I'm either paying
monitors or I'm paying workers, but I pay either way. But paying
monitors is a social waste; society would be better off if everyone
was doing productive work. (This is why Bowles doesn't like calling
this the "efficiency wage" model -- it's actually an inefficient
system.)
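To make the private-versus-social wedge concrete, here's a back-of-the-envelope comparison with entirely made-up numbers: two budgets that cost the employer exactly the same, one of which diverts a person into unproductive monitoring.

```python
# Illustrative only: two ways for an employer to spend the same $100 per worker.
# Option A: pay it all as wages.  Option B: $70 wages + $30 for a monitor.
# Suppose both extract the same $150 of output from the worker.
cost_a, output_a = 100, 150
cost_b, output_b = 70 + 30, 150

# Private accounting: the employer is indifferent.
assert cost_a == cost_b and output_a == output_b

# Social accounting: the monitor's time is wasted.  If that person could have
# produced, say, $40 of output doing something useful, society is worse off.
forgone = 40
print("social surplus: option A =", output_a, " option B =", output_b - forgone)
```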
Let's broaden this model to the entire economy. In equilibrium, firms
will make zero profit (if there was profit to be made, new firms would
enter and take it; if there were losses, firms would leave). Since
firms decide all their factors to maximize profit, that just leaves
how much workers insist on being paid. The lowest wage a worker will
accept is just enough to make it not worth searching for another job,
which means it depends on how many jobs are out there (more jobs and
they'll demand more). But the cheaper workers are, the more firms there
will be, which means more jobs, which pushes the wage workers demand
back up -- so even this basic choice by the worker is pinned down by
the requirement that firms make zero profit.
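A toy version of that feedback loop (again, my own functional forms and numbers): the wage workers insist on rises with the employment rate, per-firm profit falls as that wage rises, and firms enter or exit until profit is driven to zero. The fixed point pins down both the wage and the employment rate -- and note that it can sit well below full employment.

```python
# Toy zero-profit equilibrium (all functional forms and numbers are made up).
def required_wage(employment_rate):
    # More jobs out there => easier to find another one => workers insist on more.
    return 1.0 + 4.0 * employment_rate

def profit_per_firm(wage):
    # Firms are less profitable the more they must pay.
    return 3.0 - wage

employment = 0.9                      # start anywhere
for _ in range(200):
    wage = required_wage(employment)
    # Profits => entry => more jobs; losses => exit => fewer jobs.
    employment = min(1.0, max(0.0, employment + 0.1 * profit_per_firm(wage)))

print("wage %.2f, employment rate %.2f, profit %.2f"
      % (wage, employment, profit_per_firm(wage)))
# Settles where profit is zero -- here with only about half the workforce employed.
```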
Now let's go back to the point that this system is Pareto inefficient.
How do we deal with that? One solution is for the worker to take out a
loan, buy the assets of the company, work as hard as he can for
himself (no need to monitor if you're your own boss) and use the extra
money he makes to pay back the loan. But this only works if he's the
sole employee, not risk-averse, and can get large loans. (Risk-averse
employees might not work for themselves even if they were given the
company.)
Another option is for the employees to unionize and monitor each
other. The union and management can then bargain over how to split the
gains from cooperation, the union promising more effort and management
promising more wages. But this agreement isn't enforceable either,
since the union can always blame low output on something that isn't
their fault. However, the two could play the tit-for-tat strategy --
working hard as long as the pay is high, paying high as long as they
work hard. And, in fact, this is basically what a work-to-rule strike
is: if employers don't pay enough, workers just slow down and follow
all the rules, lowering output. (This is doable as long as per-period
pay, the chance of getting fired, or time-discounting isn't too high.)
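A rough way to see that last condition (the payoffs are numbers I made up, and I use permanent reversion to the punishment phase for simplicity, which behaves like tit-for-tat here): cooperation survives only when the one-shot gain from cheating is smaller than the discounted value of the cooperation it destroys, so anything that shrinks the future -- impatience, a high chance the relationship ends -- breaks it.

```python
# Toy repeated game between union (effort) and management (pay).
# Management's per-period payoffs (assumed numbers):
#   mutual cooperation (high pay, high effort): 3
#   cheat for one period (low pay while effort is still high): 5
#   mutual punishment (low pay, work-to-rule): 1
COOPERATE, CHEAT_ONCE, PUNISH = 3, 5, 1

def cooperation_sustainable(delta):
    """delta bundles time-discounting and the chance the relationship survives
    another period; firing risk and impatience both push it down."""
    value_cooperate = COOPERATE / (1 - delta)                 # cooperate forever
    value_cheat = CHEAT_ONCE + delta * PUNISH / (1 - delta)   # cheat once, then punished
    return value_cooperate >= value_cheat

for delta in (0.2, 0.5, 0.8):
    print(delta, cooperation_sustainable(delta))   # False, True, True
```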
And indeed, we do see a split in the real world between nice
workplaces with high wages and steady employment (often unionized) and
low-paid crummy jobs with high turnover. Larry Summers once argued the
second type was perfectly efficient, but given the high levels of
unemployment among the kind of people looking for low-wage work, it
seems more likely those enterprises just haven't figured out good ways
to bargain. What makes the first kind more common? Well, unionization,
obviously, but also macroeconomic stabilization policies that lower
the chance of job loss.
Now we come to an idea that my econ grad student friend and I came up
with when discussing the last chapter: why don't employees just pay
employers for their job? Go back to the wine example from the previous
chapter: the wine company wants to prove to you that they have
high-quality wine so they can make you into a long-term customer. So
why don't they pay you to try their wine? And, in fact, they do: thus
free samples. [All this is --AS]
Why not do the same thing with jobs? If you paid the employer for your
job, it wouldn't affect how hard you worked but it would make you less
eager to lose it. (Assume the employer is prevented, by law or
reputation, from firing you right after you pay the fee.) This totally
changes things: the job fee makes jobs so unattractive that the labor
market now clears ("This result underlines an important limitation of
labor market clearing as a policy objective: if jobs are made
sufficiently unattractive there may be no excess demand."), wages are
raised to pay back the fee, and money is transferred from worker to
employer. In our general equilibrium model, firms and employment would
rise.
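Here's some toy bookkeeping for why the fee changes things (the numbers are mine, and it ignores exactly when the fee is paid and recouped): the worker needs to stand to lose some fixed amount if fired in order to keep effort up; without a fee that loss must come entirely from a wage premium, which makes the job a prize people queue for; with a fee, the wage can still be high enough for the dismissal threat to bite, but net of the fee the job is no better than the next-best option, so the queue -- and the failure of the market to clear -- disappears.

```python
# Toy bookkeeping for the job-fee idea (all numbers are assumptions).
FALLBACK = 50    # value of the worker's next-best option
PENALTY = 20     # loss from dismissal needed to keep the worker from slacking

# No fee: the whole penalty must be a wage premium, so insiders are strictly
# better off than outsiders -- applicants queue and the market doesn't clear.
wage_no_fee = FALLBACK + PENALTY
insider_gain_no_fee = wage_no_fee - FALLBACK                    # 20

# With an up-front fee (forfeited if fired): the wage is set high enough that
# the worker can recoup the fee over time, but net of the fee the job is no
# better than the fallback -- no queue, the market clears, and the fee is the
# transfer from worker to employer.
fee = 20
wage_with_fee = FALLBACK + PENALTY
insider_gain_with_fee = wage_with_fee - fee - FALLBACK          # 0

print(insider_gain_no_fee, insider_gain_with_fee)
```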
So why aren't job fees widespread in reality? The most likely
explanation is that making jobs so expensive makes workers pissed off
at their employers, which makes them work less hard. This is
apparently why employers don't cut wages in a recession. And in
experiments people do indeed work less hard with job fees than without
them. It's also possible that workers don't trust employers not to
fire them after they pay the job fee.
You might say that it's silly to look at this theory since it's so at
odds with reality, but if we add in social preferences for reciprocity
(from ch. 3), we can see the model is consistent with what we observe.
It just goes to show how important reciprocal feelings are in labor
markets.
But surely lots of models can be considered consistent in this sense
-- why did we pick this one? A number of its predictions are true,
unlike in the classical model. First, people like having their jobs.
People who lose their job lose a great deal of money -- even during
the roaring 1990s, they lost between 50% and 150% of a year's salary. And
that's not even counting how bad unemployed people feel -- one
estimate is that the hurt to self-esteem is about as big as losing
$60,000. Second, wages go up when employment goes up. (In the
classical model, wages go _down_ when employment goes up, since you're
forced to begin hiring less efficient employees.) Third, employers
spend a lot of time monitoring employees. Fourth, effort is quite
variable -- paying people by how much they do consistently causes them
to do more.
There are three big takeaways. 1) Since the choice of production
technology depends on the benefits from monitoring workers, and since
those benefits depend on how important it is that workers keep their
job, production technology depends on social choices like unemployment
insurance. Thus technology and institutions co-evolve. Take trucking
as an example. Trucking employees prefer to drive fast and take longer
breaks, but this costs their employer more since trucks use more fuel
when going fast. So, many companies used to hire truckers who owned
their own trucks, since owner-operators would have to pay the fuel
costs themselves. Then new technology came along that allowed the
trucking company to record the speed of a truck and look at it later.
They used this to force their employees to drive slower and killed off
the owner-operator market. The technology was chosen purely because it
made it easier to monitor workers.
This is a clear example, but imagine the speed recorders also helped
the company coordinate the trucks' movement better. In such a
scenario, how much of the cost of the recorders is "transaction costs"
and how much of it is just new technology making people more productive?
Thus the notion of "transaction costs" becomes very muddled.
Increasing wages can sometimes increase output. Are wages just
transaction costs? (This is why Bowles doesn't like the term
transaction costs.)
2) The workplace is a cultural environment that shapes people's
preferences, beliefs, and friendships. Melvin Kohn has consistently
found that people who can be self-directed at work end up being
self-directed in many other areas of life (including in how they raise
their children), and end up more optimistic, more trusting, and with
higher self-esteem.
3) Norms of wage fairness evolve and change.
Next week: credit.