Over the years, many were happy to side with John Maynard Keynes, whose opinion, expressed somewhat tongue in cheek, was that capitalists value stocks not in relation to what they expect earnings to be, but recursively, based on what they expect other investors to expect:
. . . professional investment may be likened to those newspaper competitions in which the competitors have to pick out the six prettiest faces from a hundred photographs, the prize being awarded to the competitor whose choice most nearly corresponds to the average preferences of the competitors as a whole; so that each competitor has to pick, not those faces which he himself finds prettiest, but those which he thinks likeliest to catch the fancy of the other competitors, all of whom are looking at the problem from the same point of view. . . . We have reached the third degree where we devote our intelligence to anticipating what average opinion expects the average opinion to be. And there are some, I believe, who practise the fourth, fifth and higher degrees.
(Keynes 1936: 156)
This infinite regress indeed seems persuasive when one focuses on the trading pit or looks at the day-to-day gyrations of the market. But it does not sit well with long-term facts. In Figure 11.1, asset prices for the S&P 500 companies are shown to oscillate around earnings, and similar patterns can be observed when examining the history of individual stocks over a long enough period of time.
So we have two different vantage points: a promiscuous short-term perspective, according to which asset prices reflect Keynes-like recursive expectations; and a disciplined long-term viewpoint, which suggests that these expectations, whatever their initial level, eventually converge to actual earnings. Expressed in terms of Equations (3) and (4), the two views mean that the hype coefficient, however arbitrary in the short or medium run, tends to revert to a long-term mean value of 1.
Now, recall that hype is the ratio of expected earnings to earnings (EE/E), whereas the above impressions are based on the ratio of capitalization to earnings (K/E). The latter number reflects both hype and the discount rate (K/E = H/r), so unless we know what capitalists expect, we remain unable to say anything specific about hype. But we can speculate.
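The point can be made concrete with a minimal numerical sketch. All figures below are hypothetical, chosen only to show why the observable ratio K/E cannot by itself disentangle hype from the discount rate:

```python
# Hypothetical figures, for illustration only.
E = 100.0    # actual earnings ($m)
EE = 130.0   # expected earnings ($m): capitalists are 30% over-hyped
r = 0.05     # discount rate

H = EE / E   # hype coefficient: 1.3
K = EE / r   # capitalization discounts *expected* earnings: 2,600

print(K / E)   # 26.0
print(H / r)   # 26.0 -- the same number: a high K/E may mean high hype,
               # a low discount rate, or any mixture of the two
```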
Suppose that there are indeed large and prolonged fluctuations in hype. Clearly, these fluctuations would be crucial for understanding capitalism: the bigger their magnitude, the more amplified the movement of capitalization and the greater its reverberations throughout the political economy. Now, assume further that the movements of hype are not only large and prolonged, but also fairly patterned. This situation would open the door for 'insiders' to practically print their own money and therefore to try to manipulate hype
to that end. Hype would then bear directly on power, making its analysis even more pertinent for our purpose.
What do we mean by 'insiders'? The conventional definition refers to a capitalist who knows something about future earnings that other capitalists do not. Typical examples would be a KKR partner who is secretly orchestrating a big leveraged buyout, a Halliburton executive who is about to sign a new contract with the Department of Defense, or a JPMorgan Chase financier who has been discreetly informed of an imminent Fed-financed bailout of Bear Stearns. This exclusive knowledge gives insiders a better sense of whether the asset in question is under- or over-hyped; and this confidence allows them to buy assets for which earning expectations fall short of 'true' earnings - and wait. Once their private insight becomes public knowledge, the imminent rise of hype pushes up the price and makes them rich.7

7 This method should not be confused with so-called 'value investing'. The latter tactics, immortalized by Graham and Dodd's Security Analysis (1934), also involve buying cheap assets; but what constitutes 'cheap' in this case is a matter of interpretation rather than exclusive insight into facts.
These insiders are largely passive: they take a position expecting a change in hype. There is another type, though, less known but far more potent: the active insider. This type is doubly distinctive. First, it knows not only how to identify hype, but also how to shape its trajectory. Second, it tends to operate not individually, but in loosely organized pacts of capitalists, public officials, pundits and assorted 'opinion makers'. The recent US sub-prime scam, for example, was energized by a coalition of leading banks, buttressed by political retainers, eyes-wide-shut regulators, compliant rating agencies and a cheering chorus of honest-to-god analysts. The active insiders in the scheme leveraged their positions - and then stirred the capitalist imagination and frothed the hype to amplify their gains many times over.
The more sophisticated insiders can also print money on the way down. By definition, a rise in hype inflates the fortunes of outsiders who unknowingly happened to ride the bandwagon. This free ride, though, is not all that bad for insiders. Since hype is a cyclical process, its reversion works both ways. And so, as the upswing builds momentum and hype becomes excessive, those 'in the know' start selling the market short to those who are not in the know. Eventually - and if need be with a little inside push - the market tips. And as prices reverse direction, the short-positioned insiders see their fortunes swell as fast as the market sinks. Finally, when the market bottoms, the insider starts accumulating under-hyped assets so that the process can start anew.
These cyclical exploits, along with their broader consequences, are written in the annals of financial euphoria and crises - from the Tulip Mania of the seventeenth century and the Mississippi and South Sea schemes of the eighteenth century, to the 'new-economy' miracle of the twentieth century and the sub-prime bubble of recent times. The histories of these episodes - and countless others in between - are highly revealing. They will tell you how
huge fortunes have been made and many more lost. They will teach you the various techniques of public opinion making, rumour campaigns, orchestrated promotion and Ponzi schemes. And they will introduce you to the leading private investors, corporate coalitions and government organs whose art of delusion has helped stir the greed and fear of capitalists, big and small.8
However, there is one thing these stories cannot tell you, and that is the magnitude of hype. In every episode, investors were made to expect prices to go up or down, as the case may be. But price is not earnings, and as long as we do not know much about the earnings projections of capitalists, we remain ignorant of hype, even in retrospect.
Random noise
This factual void has enabled orthodox theorists to practically wipe the hype and eliminate the insiders. Granted, few deny that earnings expectations can be wrong, but most insist they cannot be wrong for long. Whatever the errors, they are at worst temporary and always random. And since hype is transitory and never systematic, it leaves insiders little to prey on and therefore no ability to persist.
The argument, known as the 'efficient market hypothesis', was formalized by Eugene Fama (1965; 1970) as an attempt to explain why financial markets seem to follow what Maurice Kendall (1953) called a 'random walk' - i.e. a path that cannot be predicted by its own history. The logic can be summarized as follows. At any point in time, asset prices are assumed 'optimal' in the sense of incorporating all available information pertaining to the capitalizing process. Now, since current prices are already 'optimal' relative to current knowledge, the arrival of new knowledge creates a mismatch. An unexpected announcement that British Petroleum has smaller oil reserves than previously reported, for example, or that the Chinese government has reversed its promise to enforce intellectual property rights, means that earlier profit expectations were wrong. And given that expectations have now been revised in light of the new information, asset prices have to be 're-optimized' accordingly.
Note that, in this scheme, truly new information is by definition random; otherwise, it would be predictable and therefore already discounted in the price. So if markets incorporate new information 'efficiently' - i.e. correctly and promptly - it follows that price movements must look as random as the new information they incorporate. And since ('technical analysis' notwithstanding) current price movements do seem random relative to their past movements, the theorist can happily close the circle and conclude that this must be so because new information is being discounted 'efficiently'.9
8 For some notable histories, see Mackay (1841), Kindleberger (1978) and Galbraith (1990).
9 This first draft of the financial constitution is often softened by various amendments, particularly to the definition of information and to the speed at which the market incorporates it. According to Fischer Black (1986), the news always comes in two flavours: information and noise. Information is something that is relevant to 'theoretical value' (read true value), while noise is everything else. Unfortunately, since, as Black acknowledges, true value can never be observed, there is no way to tell what is 'relevant', and therefore no way to separate information from noise. And since the two are indistinguishable, everyone ends up trading on a mixture of both. Naturally, this mixture makes the theory a bit fuzzy, but Black is undeterred. To keep the market equilibrated, he loosens the definitions. An efficient market, he states, is one in which prices move within a 'factor of 2' of true value: i.e. between a high that is twice the (unknowable) magnitude of value and a low that is half its (unknowable) size. In his opinion, this definition of efficiency holds 90 per cent of the time in 90 per cent of the markets - although he concedes that these limits are not cast in stone and can be tailored to the expert's own likings (p. 533).
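The circular logic described above is easy to reproduce. In the toy simulation below (a sketch with arbitrary parameters, not anyone's trading model), prices 'discount' random news instantly and fully, and the resulting price changes are serially uncorrelated - exactly the random walk that the theorist then reads back as proof of efficiency:

```python
import random

random.seed(1)
price, changes = 100.0, []
for day in range(10_000):
    news = random.gauss(0, 1)   # 'truly new' information: random by construction
    price += news               # instant, full incorporation
    changes.append(news)

# Lag-1 autocorrelation of price changes: close to zero,
# i.e. the path cannot be predicted from its own history.
m = sum(changes) / len(changes)
cov = sum((a - m) * (b - m) for a, b in zip(changes, changes[1:]))
var = sum((c - m) ** 2 for c in changes)
print(cov / var)
```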
There is a critical bit that needs to be added to this story, though. As it stands, the presumed efficiency of the asset market hangs crucially on the existence of 'smart money' and its hired experts. The reason is obvious. Most individual investors are blissfully unaware of new developments that are 'relevant' to earnings, few can appreciate their implications, and even fewer can do so accurately and quickly. However, since any mismatch between new information and existing prices is an unexploited profit opportunity, investors all have an incentive to obtain, analyse and act on this new information. And given that they themselves are ill equipped for the job, they hire financial analysts and strategists to do it for them.
These analysts and strategists are the engineers of market efficiency. They have access to all available information, they are schooled in the most up-to-date models of economics and finance, and there are enough of them in the beehive to find and eliminate occasional mistakes in judgement. The big corporations, the large institutional investors, the leading capitalists - 'smart money' - all employ their services. Individual investors' folly is 'smart money's' opportunity. By constantly taking advantage of what others do not know, the pundits advertise their insight and keep the market on an efficient keel. And since by definition no one knows more than they do, there is nobody left to systematically outsmart the market. This, at any rate, is the official theology.
Flocks of experts and the inefficiency of markets
The problem is with the facts. As noted, until recently nothing much was known about expectations and hype, so the theory could never be put to the test. But the situation has changed. In 1971, a brokerage firm named Lynch, Jones and Ryan (LJR) started to collect earning estimates made by other brokers. The initial coverage was modest in scope and limited in reach. It consisted of projections by 34 analysts pertaining to some 600 individual firms, forecasts that LJR summarized and printed for the benefit of its own clients. But the service - known as the Institutional Brokers Estimate System, or IBES - expanded quickly and by the 1980s became a widely used electronic
data provider. The system currently tracks the forecasts of some 90,000 analysts and strategists worldwide, regarding an array of corporate income statements and cash flow items. The forecasts cover both individual firms and broad market indices and are projected for different periods of time - from the next quarter through to the vaguely defined 'long term'. The estimates go back to 1976 for US-based firms and to 1987 for international companies and market indices.
And so, for the first time since the beginning of discounting more than half a millennium ago, there is now a factual basis to assess the pattern and accuracy of expert projections. This new source of data has not been lost on the experts. Given that any new information is a potential profit opportunity, along with IBES there emerged a burgeoning 'mini-science of hype': a systematic attempt to foretell the fortune tellers.10

10 For an extensive annotated bibliography on earnings forecasts, see Brown (2000).
So far, the conclusions of this mini-science hardly flatter the forecasters and seriously damn their theorists. In fact, judging by the efficacy of estimates, the efficient market hypothesis should be shelved silently. It turns out that analysts and strategists are rather wasteful of the information they use. Their forecast errors tend to be large, persistent and very similar to those of their peers. They do not seem to learn from their own mistakes, they act as a herd, and when they do respond to circumstances, their adjustment is painfully lethargic.
A recent comprehensive study of individual analyst forecasts by Guedj and Bouchaud (2005) paints a dismal picture. The study covers 2,812 corporate stocks in the United States, the European Union, the United Kingdom and Japan, using monthly data for the period 1987-2004. Of its many findings, three stand out. First, the average forecast errors are so big that even a simple 'no-change' projection (with future earnings assumed equal to current levels) would be more accurate. Second, the forecasts are not only highly biased, but also skewed in the same direction: looking twelve months ahead, the average analyst overestimates the earnings of a typical corporation by as much as 60 per cent! (Had analysts erred equally in both directions, the average error would have been zero.) Although the enthusiasm cools down as the earnings announcement date gets closer, it remains large enough to keep the average forecast error as high as 10 per cent as late as one month before the reports are out. Finally, and perhaps most importantly, the projections are anything but random. The dispersion of forecasts among the analysts is very small - between one tenth and one third of the size of their forecast errors. This difference suggests, in line with Keynes, that analysts pay far more attention to the changing sentiment of other analysts than to the changing facts.
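To see what these two statistics measure, the sketch below computes them for an invented five-analyst sample (the figures are illustrative, not Guedj and Bouchaud's data):

```python
from statistics import mean, stdev

actual_eps = 1.00                            # earnings per share eventually reported
forecasts = [1.45, 1.55, 1.60, 1.65, 1.75]   # hypothetical 12-month-ahead estimates

bias = mean(f - actual_eps for f in forecasts) / actual_eps
error_size = mean(abs(f - actual_eps) for f in forecasts)
dispersion = stdev(forecasts)

print(f"average bias: {bias:+.0%}")        # +60%: everyone errs upward
print(f"typical error: {error_size:.2f}")  # 0.60
print(f"dispersion: {dispersion:.2f}")     # 0.11 -- roughly 1/5 of the error
# Analysts huddle tightly around a consensus that is itself far off the mark.
```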
Behavioural theorists of finance often blame these optimistic, herd-like projections on the nature of the analyst's job. The analysts, they argue, tend to forge non-arm's-length relationships with the corporations they cover, and this intimacy leads them to 'err' on the upside. Moreover, the analysts'
preoccupation with individual corporate performance causes them to lose sight of the broader macro picture, creating a blind spot that further biases their forecast.
These shortcomings are said to be avoided by strategists. Unlike analysts who deal with individual firms, strategists examine broad clusters of corporations, such as the S&P 500 or the Dow Jones Industrial Average. They also use different methods. In contrast to the analysts who build their projections from the bottom up, based on company 'fundamentals', strategists construct theirs from the top down, based on aggregate macroeconomic models spiced up with political analysis. Finally, being more detached and closely attuned to the overall circumstances supposedly makes them less susceptible to cognitive biases.
Yet this approach does not seem very efficient either. Darrough and Russell (2002) compare the performance of bottom-up analysts to top-down strategists in estimating next year's earnings per share for the S&P 500 and Dow Jones Industrial Average over the period 1987-99.11 They show that although strategists are less hyped than analysts, their estimates are still very inaccurate and path dependent. They are also far more lethargic than analysts in revising their forecasts. Being locked into their macro models, they often continue to 'project' incorrect results retroactively, after the earnings have already been reported! The appendix to this chapter examines the temporal pattern of strategist estimates. It demonstrates not only that their forecast errors are very large, but that they follow a highly stylized, cyclical pattern. Their hype cycle is several times longer than the forecast period itself, and its trajectory is systematically correlated with the direction of earnings.
Let there be hype
And so the Maginot Line of market efficiency crumbles. The analysts and strategists know full well that 'it is better for reputation to fail conventionally than to succeed unconventionally', as Keynes once put it (1936: 158). Consequently, rather than ridding each other of the smallest of errors, they much prefer the trodden path of an obedient flock. Ironically, this preference is greatly strengthened by the fact that most of them actually believe in market efficiency. Ultimately, the market must be right, and since it is their recommendations that keep the market on track, it follows that to deviate from their own consensus is to bet against the house. Better to run with the herd.
11 Bottom-up projections for each index are constructed in two stages: first by averaging for each individual company in the index the estimates of the different analysts, yielding the company's 'consensus forecast'; and then by computing the weighted average of these consensus forecasts, based on the relative size of each company in the index. The top-down consensus forecasts for each index are obtained by averaging the projections of the different strategists.
This inherent complacency, amplified by the folly of so-called 'dumb money', means that there is no built-in 'mechanism' to stop the insiders. In fact, the very opposite is the case. Since the experts tend to move in a flock, it is enough to influence or co-opt those who lead (the mean estimate) in order to shift the entire pack (the distribution of estimates). And the temptation to do so must be enormous. Fluctuations in hype can be several times larger than the growth of actual earnings, so everything else being equal, a dollar invested in changing earning expectations could yield a return far greater than a dollar spent on increasing the earnings themselves.
Pressed to the wall, mainstream finance responded to these anomalies by opening the door to various theories of 'irrationality' - from Herbert Simon's 'bounded rationality' (1955; 1979), through Daniel Ellsberg's 'ambiguity aversion' (1961), to Daniel Kahneman and Amos Tversky's 'prospect theory' (1979), to Richard Thaler's broader delineation of 'behavioural finance' (De Bondt and Thaler 1985). These explanations, though, remain safely within the consensus. Like their orthodox counterparts, they too focus on the powerless individual who passively responds to given circumstances. Unlike his nineteenth-century predecessor, this 'agent' is admittedly imperfect. He is no longer fully informed and totally consistent, he tends to harbour strange preferences and peculiar notions of utility (and may even substitute 'satisficing' for 'maximizing'), and he sometimes lets his mood cloud his better judgement.
These deviations, argue their theorists, fly in the face of market efficiency: they show that irrational hype can both exist and persist. But that conclusion, the theorists are quick to add, does not bring the world to an end. As noted in Chapter 10, individual irrationality, no matter how rampant, is assumed to be bounded and therefore predictable. And since predictable processes, no matter how irrational, can be modelled, the theorists can happily keep their jobs.
Of course, what the models cannot tell us (and the financial modellers are careful never to ask) is how these various 'irrationalities' are being shaped, by whom, to what ends and with what consequences. These aspects of capital accumulation have nothing to do with material technology and individual utility. They are matters of organized power. And on this subject, finance theorists and capitalist insiders are understandably tight-lipped. The only way to find out is to develop a radical political economy of hype independent of both.
The discount rate
If putting a number on future income and wealth seems difficult, knowing how much to trust one's prediction is next to impossible - or, at least that is how it was for much of human history. When Croesus, the fabulously rich king of Lydia, asked Solon of Athens if 'ever he had known a happier man than he', the latter refused to be impressed by the monarch's present wealth:
The gods, O king, have given the Greeks all other gifts in moderate degree; and so our wisdom, too, is a cheerful and a homely, not a noble and kingly wisdom; and this, observing the numerous misfortunes that attend all conditions, forbids us to grow insolent upon our present enjoyments, or to admire any man's happiness that may yet, in course of time, suffer change. For the uncertain future has yet to come, with every possible variety of fortune; and him only to whom the divinity has continued happiness unto the end, we call happy; to salute as happy one that is still in the midst of life and hazard, we think as little safe and conclusive as to crown and proclaim as victorious the wrestler that is yet in the ring.
(Plutarch 1859, Vol. 1: 196-97, emphasis added)
Solon's caution was not unfounded, for in due course the hubristic Croesus lost his son, wife and kingdom. And in this respect, we can say that little has changed. The future is still uncertain, but the capitalist rulers, like their royal predecessors, continue to convince themselves that somehow they can circumvent this uncertainty. The main difference is in the methods they use. In pre-capitalist times uncertainty was mitigated by the soothing words of astrologists and prophets, whereas nowadays the job is delegated to the oracles of probability and statistics.
Capitalist uncertainty is built right into the discounting formula. To see why, recall our derivation of this formula in Equations (1) to (6) in Chapter 9. We started by defining the rate of return (r) as the ratio of the known earnings stream (E) to the known dollar value of the invested capital (K), such that r = E/K. The expression is straightforward. It has one equation, one unknown and an obvious solution. Next, we rearranged the equation. Since the rate of return can be calculated on the basis of the earnings and the original investment, it follows that the original investment can be calculated based on the rate of return and the earnings, so that K = E/r. The result is the discount formula, the social habit of thinking with which capitalists began pricing their capital in the fourteenth century.
Mathematically, the two formulations seem identical, if not circular (recall the Cambridge Controversy). But in reality there is a big difference between them. The first expression is ex post. It computes the realized rate of return based on knowing both the initial investment and the subsequent earnings. The second expression is ex ante. It calculates the present value of capital based on the future magnitude of earnings. These future earnings, however, cannot be known in advance. Furthermore, since capitalists do not know their future earnings, they cannot know the rate of return these earnings will eventually represent. Analytically, then, they are faced with the seemingly impossible task of solving one equation with three unknowns.
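The asymmetry between the two readings can be spelled out in a few lines. In the sketch below (all figures hypothetical), the ex post computation has a single unknown, whereas the ex ante one must conjure two of its three terms:

```python
# Ex post: K and E are both known; r is the only unknown.
K_invested = 2_000.0                     # original investment ($)
E_realized = 120.0                       # earnings actually produced ($/yr)
r_realized = E_realized / K_invested     # 0.06 -- a simple ratio

# Ex ante: K is the unknown, but E and r now refer to the future.
# Neither is knowable in advance; both must be conjured up.
E_expected = 150.0                       # guessed future earnings
r_required = 0.05                        # guessed rate the asset 'should' yield
K_present = E_expected / r_required      # 3,000.0 -- the discount formula

print(r_realized, K_present)
```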
In practice, of course, that is rarely a problem.
Capitalists simply conjure up two of the unknown numbers and use them to compute the third. The question for us is how they do it and what the process means for accumulation. The previous section took us through the first step: predicting future earnings. As we saw, these predictions are always wrong. But we also learned that the errors are not unbounded, and that, over a sufficiently long period of time, the estimates tend to oscillate around the actual numbers. The second step, to which we now turn, is articulating the discount rate - the rate that the asset is expected to yield with the forecasted earnings. And it turns out that the two steps are intimately connected. The discount rate mirrors the confidence fortune-telling capitalists have in their own forecasts: the greater their uncertainty, the higher the discount rate - and vice versa.
The normal and the risky
What is the 'proper' discount rate? The answer has a very long history, dating back to Mesopotamia in the third millennium BCE (a topic to which we return in the next chapter).12 Conceptually, the computation has always involved two components: a 'benchmark' rate plus a 'deviation'. The meaning of these two components, though, has changed markedly over time.
Until the emergence of capitalization in the fourteenth century, both components were seen as a matter of state decree, sanctioned by religion and tradition, and modified by necessity. The nobility and clergy set the just lending rates as well as the tolerated zone of private divergence, and they often kept them fixed for very long periods of time (Hudson 2000a, 2000b).
Neoclassicists never tire of denying this 'societal' determination. Scratch the pre-capitalist surface, they insist, and underneath you will find the eternal laws of economics. From the ancient civilizations and early empires, to the feudal world, to our own day and age, the underlying logic has always been the same: the productivity of capital determines the 'normal' rate of return, and the uncertainty of markets determines the 'deviations' from that normal.
This confidence seems unwarranted. We have already seen that the neoclassical theory of profit is problematic, to put it politely. But even if the theory were true to the letter, it would still be difficult to fathom how its purely capitalist concepts could possibly come to bear on a pre-capitalist discount rate. First, prior to the emergence of capitalization in the fourteenth century the productivity doctrine was not simply unknown; it was unthinkable. Second, there were no theoretical tools to conceive, let alone quantify, uncertainty. And, finally, there were no systematic data on either productivity or uncertainty to make sense of it all. In this total blackout, how could anyone calculate the so-called 'economic' discount rate?
12 There is considerable recent literature on the ancient origins of interest, debt and money. These contrarian writings, partly inspired by the work of Mitchell-Innes (1913; 1914), critique the undue imposition of neoclassical logic on pre-capitalist societies and instead emphasize a broader set of political, religious and cultural determinants. Important collections include Hudson and Van de Mieroop (2002), Hudson and Wunsch (2004), Ingham (2004) and Wray (2004).
Probability and statistics
These concepts have become meaningful only since the Renaissance. The turning point occurred in the seventeenth century, with the twin invention of probability and statistics.13 In France, Blaise Pascal and Pierre de Fermat, mesmerized by the abiding logic of a game of chance, began to articulate the mathematical law of bourgeois morality. Probability was justice. In the words of Pascal, 'the rule determining that which will belong to them [the players] will be proportional to that which they had the right to expect from fortune. . . [T]his just distribution is known as the division' (cited in Bernstein 1996: 67, emphases added).14
At about the same time, Englishmen John Graunt, William Petty and Edmund Halley took the first steps in defining the field of practical statistics. The term itself connotes the original goal: to collect, classify and analyse facts bearing on matters of state. And indeed, Graunt, whose 1662 estimate of the population of London launched the scientific art of sampling, was very much attuned to the administrative needs of the emerging capitalist order. His practical language would have been music to the ears of today's chief executives and finance ministers:
It may be now asked, to what purpose tends all this laborious buzzling and groping? . . . I Answer. . . That whereas the Art of Governing, and the true Politiques, is how to preserve the Subject in Peace, and Plenty, that men study onely that part of it, which teacheth how to supplant, and over-reach one another, and how, not by fair out-running, but by tripping up each other's heels, to win the Prize. Now, the Foundation, or Elements of this honest harmless Policy is to understand the Land, and the hands of the Territory to be governed, according to all their intrinsick, and accidental differences. . . . It is no less necessary to know how many people there be of each Sex, State, Age, Religion, Trade, Rank, or Degree, &c. by the knowing whereof Trade and Government may be made more certain, and Regular; for, if men know the People as aforesaid, they might know the consumption they would make, so as Trade might not be hoped for where it is impossible.
(Graunt 1662: 72-73, original emphases)
13 The social history of these related disciplines is told in Hacking (1975; 1990) and Bernstein (1996). Our account here draws partly on their works.

14 Probability theory in fact was developed a century earlier, by the Italian mathematician Girolamo Cardano. His work, however, was ahead of its time and therefore largely ignored.

Although initially independent, probability and statistics were quickly intertwined, and in more than one way. The new order of capitalism unleashed multiple dynamics that amplified social uncertainty. Instead of the stable and
clear hierarchies of feudalism came a new ethic of autonomous individualism and invisible market forces. The slow cycle of agriculture gave rise to bustling industrial cities and rapidly growing populations. The relatively simple structures of personal loyalty succumbed to the impersonal roller coaster of accumulation and the complex imperatives of government finances and regulations. More and more processes seemed in flux. But then, with everything constantly changing, how could one tell fact from fiction? What was the yardstick for truth on the path to societal happiness and personal wealth?
The very same difficulty besieged the new sciences of nature. In every field, from astronomy and physics to chemistry and biology, there was an explosion of measurement. But the measurements rarely turned out to be the same - so where was truth? With so many 'inaccuracies', how could one pin down the ultimate laws of nature?
The solution, in both society and science, came from marrying logical probability with empirical statistics. According to this solution, truth is hidden in the actual statistical facts, and probability theory is the special prism through which the scientist can see it. Any one measurement may be in error. But when the errors are random they tend to cancel each other out, and if we increase the size of the sample we can get as close to the truth as we wish. Moreover, and crucially for our purpose here, probability theory can also tell us how wrong our pronouncement of truth is 'likely' to be. It tosses the al-zahr - Arabic for 'dice' - to reckon the hazards.
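A toy example, with arbitrary numbers, shows the mechanics of this marriage: individual measurements err, random errors wash out as the sample grows, and probability theory prices the residual ignorance in advance:

```python
import math
import random

random.seed(7)
truth = 100.0                      # the 'true' value hidden in the statistical facts
for n in (10, 1_000, 100_000):
    sample = [truth + random.gauss(0, 5) for _ in range(n)]
    estimate = sum(sample) / n
    std_error = 5 / math.sqrt(n)   # the promised 'measure of our ignorance'
    print(n, round(estimate, 3), round(std_error, 3))
# The estimate converges on the truth, and the theory states beforehand
# how wrong the pronouncement is 'likely' to be.
```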
This marriage of logic and measurement changed the concept of the unknown, making it seem less intimidating. Of course, the fear is still very much there: 'Unless you are running scared all the time, you're gone', explains the quintessential forward-looking capitalist, Bill Gates (1994). But the unknown, having been mediated through probability and statistics, has become less mysterious and, in that sense, less menacing. For the first time in history, uncertainty has been given a shape: it has a 'distribution'. Probability and statistics draw a clear relationship between the 'normal' and the 'dispersion' around it, between what is supposedly 'natural' and 'true' and what is 'distorted' and 'devious', between the rulers at the 'centre' and the rebels and radicals at the 'margins'. They translate the unknown into seemingly precise 'standard deviations', and by so doing give human beings a comforting 'measure of their ignorance'.
The effect of this newly found confidence has been nothing short of revolutionary. It has opened the door to massive advances in the natural sciences. Virtually every field - from geodesy and astronomy, to classical and quantum statistical mechanics, to the biostatistics of evolution and medicine - has been rewritten by the new technique. And the same has happened in political economy. Every aspect of capitalism - from insurance, to engineering, to production, salesmanship, finance, public management, weapon development, population control, health care, mass psychology, the media and education, to name a few - has been re-articulated and further developed to leverage the power of probability and statistics. The belief that one can at
least sketch the unknown has encouraged social initiative and intellectual creativity. The sense of knowing the 'odds' has made it much easier to dare to take a risk.
Averting risk: the Bernoullian grip
For the running-scared capitalist, though, probability and statistics are a mere starting point. They pretend to give the odds - but the odds alone are still devoid of meaning. And that is where utilitarianism comes into the picture.
The issue can be illustrated with a simple example. Suppose Bill Gates considers acquiring one of two software companies, Civilsoft and Weaponsoft. Civilsoft sells in the open market and is a bit volatile. The analysts tell Gates that, in their view, it has a 50 per cent chance of generating annual earnings of $50 million and a 50 per cent chance of generating annual earnings of $150 million. Weaponsoft is different. It sells to the military and has recently managed to secure a long-term contract with the US Department of Defense. According to the analysts, it is certain to generate $100 million annually. Now, probability calculations make the two firms equally attractive: mathematically, both have expected annual earnings of $100 million.15 And so, if Gates believes his analysts, he should be indifferent as to which of the two he acquires.
Not so, argued Daniel Bernoulli (1738). In his seminal paper, published more than two centuries before Gates was born, he stipulated that the measurement of risk involves more than the mere statistical odds. It requires that we put a 'moral' judgement on the expected dollars and cents - a judgement that he insisted must be based on diminishing marginal utility.
According to this logic, Gates, like the rest of us, should contemplate not the expected dollar earnings the companies will generate, but the expected utility he will get from consuming those earnings. This modification makes a big difference. '[A]ny increase in wealth no matter how insignificant', wrote Bernoulli, 'will always result in an increase in utility which is inversely proportionate to the quantity of goods already possessed' (p. 25). So the first dollar Gates earns generates more utility than the second, the second more than the third, and so on all the way to the billionth dollar and beyond.
To illustrate the consequence of this stipulation, let us split the expected earnings into $50 million chunks and assume for simplicity that with diminishing marginal utility the first chunk gives Gates 3,000 utils, the second 2,000 utils and the third 1,000 utils. With this assumption, the takeover targets no longer look equally attractive: the less risky Weaponsoft is expected to generate 5,000 utils, whereas the more volatile Civilsoft is likely to give only 4,500.16 And since ultimately all Mr Gates cares about is hedonic consumption, it is better for him to acquire the military contractor. It is likely to give him 500 more utils per annum.

15 Mathematically, the expected earnings of Civilsoft are: $50 million × 0.5 + $150 million × 0.5 = $100 million, the same as Weaponsoft's.

16 For Weaponsoft, the expected utility is the sum of 3,000 utils for the first $50 million chunk and 2,000 utils for the second. For Civilsoft, the computation is: 3,000 utils × 0.5 + (3,000 utils + 2,000 utils + 1,000 utils) × 0.5 = 4,500 utils.
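The arithmetic of footnotes 15 and 16 can be checked in a few lines. The sketch below simply encodes the text's own hypothetical figures:

```python
CHUNK_UTILS = [3000, 2000, 1000]   # diminishing utility per successive $50m chunk

def utils(earnings_m: float) -> int:
    """Total utils yielded by a given level of annual earnings ($m)."""
    return sum(CHUNK_UTILS[: int(earnings_m // 50)])

# Expected dollars: identical.
print(0.5 * 50 + 0.5 * 150, 1.0 * 100)                 # 100.0  100.0

# Expected utils: not identical.
print(0.5 * utils(50) + 0.5 * utils(150), utils(100))  # 4500.0  5000
# The certain $100m beats the gamble by 500 utils: Bernoullian risk aversion.
```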
Finance theory has never managed to shake loose of the Bernoulli grip. His paper triggered a deluge of publications on risk, many of which modified and revised his original formulation. But most remain locked behind his three subjectivist tenets. First, risk ultimately is a personal matter. Second, attitude to risk is rooted in the individual's hedonic preferences. And third, because of diminishing marginal utility, most individuals tend to be risk averse. This grip keeps the risk analysis of contemporary capitalist power hostage to the eighteenth-century belief in individual utilitarianism.
The unknowable
Of course, most theorists of capitalism ignore power. So before continuing we should point out that Bernoulli's mechanical hedonism may be inappropriate for the study of risk quite apart from the absence of power. First, there is the question of the odds. Capitalists are concerned with the future, yet statistical estimates of probabilities can only be drawn from the past. This is a crucial mismatch. As David Hume's Treatise of Human Nature (1739) tells us, the mere fact that all past experiments have found water to boil at 100 degrees Celsius does not mean that the same will happen next time we put the kettle on the stove. Natural scientists have managed to assume this challenge away by stipulating the stability of natural laws (whether deterministic or stochastic), but this stipulation seems a bit stretched when applied to society.
The inherent difficulty of calculating the social odds was heightened during the first half of the twentieth century. The combined onslaught of revolutions, financial crises, a Great Depression and two world wars suggested that the problem was not merely one of assigning odds to possible outcomes, but of specifying what those outcomes might be in the first place.
According to Frank Knight (1921), risk calculations presuppose a known set of odds. But in society, the future contains an element of novelty, and novelty cannot be pre-assigned a probability: it is unique and therefore inherently uncertain. Even Keynes, whose belief in the existence of so-called objective social probability survived the First World War, caved in after the Second. In matters of society, he confessed, the future is largely unknowable:
By 'uncertain' knowledge, let me explain, I do not mean merely to distinguish what is known for certain from what is only probable. The game of roulette is not subject, in this sense, to uncertainty; nor is the prospect of
a Victory bond being drawn. Or, again, the expectation of life is only slightly uncertain. Even the weather is only moderately uncertain. The sense in which I am using the term is that in which the prospect of a European war is uncertain, or the price of copper and the rate of interest twenty years hence, or the obsolescence of a new invention, or the position of private wealth-owners in the social system in 1970. About these matters there is no scientific basis on which to form any calculable probability whatever. We simply do not know.
(Keynes 1937: 213-14, emphasis added)
And then there is the second problem. Even if we convince ourselves that the mathematical odds exist and that we can somehow know them, there is still the task of assigning to these odds utilitarian weights. Without these weights, there is no point talking about Bernoullian risk. Yet these weights, made up of utils, vary from person to person and from moment to moment, and this fluidity has implications. Any given asset must be seen as having not one, but many quantities of risk (as many as there are potential capitalists). Furthermore, the quantity of risk, being partly subjective, will change with preferences even if the so-called objective odds remain unaltered. This ever-shifting multiplicity makes it difficult to pin down the 'correct' risk premium and therefore to specify the 'proper' discount rate. And with this rate hanging in the air, how are capitalists to compute an asset's 'true' present value?
The capital asset pricing model
These logical challenges proved no match for the capitalist nomos. Although investors may be unable to calculate risk on their own, they can ask the know-all market to do it for them. All they need is a bureaucratic blueprint disguised as theory, and Lord Keynes was prescient enough to anticipate what it would take to produce one. His checklist was short: (1) believe that the present odds are a reliable guide to future ones; (2) assume that other investors got those odds right; and (3) conclude that their relevant computations are already reflected in asset prices (Keynes 1937: 214). The instructions were simple enough, and when a year later Paul Samuelson (1938) announced that prices reveal to us what we desire but cannot express ('revealed preferences'), the road for an operational theory of risk was finally wide open.
The glory went to Harry Markowitz and William Sharpe. Markowitz (1952; 1959) gave investors a quantitative definition of risk and told them how to 'optimize' risk and return through diversification. Sharpe (1964), building on Markowitz's insight, showed capitalists how to tease out of the market the 'true' risk premium with which to discount their assets. These contributions closed the circle. The capitalization ritual was now fully articulated, and the two inventors went on to collect the Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel.
Portfolio selection
Markowitz's manuals focused on the Bernoullian individual: the risk-averse investor. In buying and selling financial assets, he said, 'the investor does (or should) consider expected return a desirable thing and variance of return an undesirable thing' (1952: 77, original emphasis). And the best method to achieve both goals, he concluded, is to diversify.
Although Markowitz himself spoke merely of the 'variance' of returns - defined as the squared deviation of the rate of price change from its own mean value - the term was quickly adopted as a synonym for risk. This in itself was a major achievement. Until Markowitz, there was no quantitative definition of risk, let alone one that everyone agreed on.17 So the fact that he was able to galvanize the 'investment community' around this concept - even if he never intended to - is already worth a Nobel.
But Markowitz did much more than that. By showing why risk should be handled through diversification, he provided the justification for an old practice and helped underwrite the new trend of institutional investing. To illustrate his logic, consider a portfolio comprising different financial assets. If the market prices of these assets do not move completely in tandem (so that the correlations between their rates of change are less than 1), their unique fluctuations will partly offset one another. This partial offsetting has a great benefit: it causes the price volatility of the portfolio as a whole to be smaller than the average volatility of the individual assets. By owning a portfolio of different assets, therefore, the capitalist can enjoy their average return while suffering less than their average 'risk'. Diversification, it now seemed, offered an entirely free lunch.
Which portfolio should the capitalist own? Conceptually, it is possible to plot on a two-dimensional chart the return/variance attributes of all possible portfolios. Of these endless combinations, there is a subset that Markowitz identified as 'efficient'. These are the best deals. Each efficient portfolio offers the minimum variance for a given return - or, alternatively, the maximum return for a given variance. The only way to do better on one attribute is to give up on the other, and vice versa. Conveniently, all efficient portfolios lie on a well-defined 'efficient frontier', and the Bernoullian capitalist simply needs to pick the one that equilibrates her very own greed and fear.
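The 'free lunch' is ordinary variance algebra. The sketch below, with illustrative parameters, computes the volatility of an equally weighted two-asset portfolio at various correlations:

```python
import math

sigma = 0.20   # each asset's own volatility (standard deviation of returns)
for rho in (1.0, 0.5, 0.0, -0.5):
    # var(0.5A + 0.5B) = 0.25*s^2 + 0.25*s^2 + 2*(0.5*0.5)*rho*s^2
    port_var = 0.5 * sigma**2 + 0.5 * rho * sigma**2
    print(rho, round(math.sqrt(port_var), 3))
# 1.0 -> 0.2, 0.5 -> 0.173, 0.0 -> 0.141, -0.5 -> 0.1
# Only with perfect correlation does the portfolio merely match the average
# asset; any rho < 1 delivers the average return at below-average 'risk'.
```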
A few years after Markowitz made his mark, James Tobin (1958) offered an even sweeter deal. If investors are able to borrow and lend at a 'risk-free' rate of interest (such as the rate on US T-bills) they can in fact outperform the efficient frontier. All it takes is two easy steps. First, they need to single out on the efficient frontier that particular portfolio (labelled M for convenience) which, when combined with borrowing or lending, yields the highest return
for every level of volatility. And then they make their move. Those who are more risk averse can invest part of their money in M, putting the rest of it into risk-free assets (i.e. lending it to the central bank). And those who are less risk averse can borrow at the risk-free interest rate and invest the extra cash in additional units of M.

17 Ricciardi (2004) managed to collate a list of no less than 150 unique risk indicators - hardly an indication of unanimity.
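Tobin's recipe reduces to one line of algebra: the expected return and the volatility of the combined holding both scale linearly with the weight w placed on M, tracing the straight line that touches Markowitz's frontier only at M itself. A minimal sketch, with hypothetical parameters:

```python
r_free = 0.03                 # 'risk-free' rate, e.g. on T-bills
r_m, sigma_m = 0.10, 0.20     # portfolio M: expected return and volatility

for w in (0.5, 1.0, 1.5):     # w > 1: borrow at r_free to buy extra units of M
    expected = w * r_m + (1 - w) * r_free
    volatility = w * sigma_m
    print(w, round(expected, 3), round(volatility, 3))
# 0.5 -> (0.065, 0.1): half the money lent out at the risk-free rate
# 1.5 -> (0.135, 0.3): a leveraged position in M
```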
. . . professional investment may be likened to those newspaper competi- tions in which the competitors have to pick out the six prettiest faces from a hundred photographs, the prize being awarded to the competitor whose choice most nearly corresponds to the average preferences of the com- petitors as a whole; so that each competitor has to pick, not those faces which he himself finds prettiest, but those which he thinks likeliest to catch the fancy of the other competitions, all of whom are looking at the problem from the same point of view. . . . We have reached the third degree where we devote our intelligence to anticipating what average opinion expects the average opinion to be. And there are some, I believe, who practice the fourth, fifth and higher degrees.
(Keynes 1936: 156)
This infinite regress indeed seems persuasive when one focuses on the trading pit or looks at the day-to-day gyrations of the market. But it does not sit well with long-term facts. In Figure 11. 1, asset prices for the S&P 500 companies are shown to oscillate around earnings, and similar patterns can be observed when examining the history of individual stocks over a long enough period of time.
So we have two different vantage points: a promiscuous short-term perspective, according to which asset prices reflect Keynes-like recursive expectations; and a disciplined long-term viewpoint, which suggests that these expectations, whatever their initial level, eventually converge to actual earnings. Expressed in terms of Equations (3) and (4), the two views mean that the hype coefficient, however arbitrary in the short or medium run, tends to revert to a long-term mean value of 1.
Now, recall that hype is the ratio of expected earnings to earnings (EE/E), whereas the above impressions are based on the ratio of capitalization to earn- ings (K/E). The latter number reflects both hype and the discount rate (K/E = H/r), so unless we know what capitalists expect, we remain unable to say anything specific about hype. But we can speculate.
Suppose that there are indeed large and prolonged fluctuations in hype. Clearly, these fluctuations would be crucial for understanding capitalism: the bigger their magnitude, the more amplified the movement of capitalization and the greater its reverberations throughout the political economy. Now, assume further that the movements of hype are not only large and prolonged, but also fairly patterned. This situation would open the door for 'insiders' to practically print their own money and therefore to try to manipulate hype
Elementary particles 191
to that end. Hype would then bear directly on power, making its analysis even more pertinent for our purpose.
What do we mean by 'insiders'? The conventional definition refers to a capitalist who knows something about future earnings that other capitalists do not. Typical examples would be a KKR partner who is secretly orches- trating a big leveraged buyout, a Halliburton executive who is about to sign a new contract with the Department of Defense, or a JPMorgan-Chase finan- cier who has been discretely informed of an imminent Fed-financed bailout of Bear Stearns. This exclusive knowledge gives insiders a better sense of whether the asset in question is under- or over-hyped; and this confidence allows them to buy assets for which earning expectations fall short of 'true' earnings - and wait. Once their private insight becomes public knowledge, the imminent rise of hype pushes up the price and makes them rich. 7
These insiders are largely passive: they take a position expecting a change in hype. There is another type, though, less known but far more potent: the active insider. This type is doubly distinctive. First, it knows not only how to identify hype, but also how to shape its trajectory. Second, it tends to operate not individually, but in loosely organized pacts of capitalists, public officials, pundits and assorted 'opinion makers'. The recent US sub-prime scam, for example, was energized by a coalition of leading banks, buttressed by polit- ical retainers, eyes-wide-shut regulators, compliant rating agencies and a cheering chorus of honest-to-god analysts. The active insiders in the scheme leveraged their positions - and then stirred the capitalist imagination and frothed the hype to amplify their gains many times over.
The more sophisticated insiders can also print money on the way down. By definition, a rise in hype inflates the fortunes of outsiders who unknowingly happened to ride the bandwagon. This free ride, though, is not all that bad for insiders. Since hype is a cyclical process, its reversion works both ways. And so, as the upswing builds momentum and hype becomes excessive, those 'in the know' start selling the market short to those who are not in the know. Eventually - and if need be with a little inside push - the market tips. And as prices reverse direction, the short-positioned insiders see their fortunes swell as fast as the market sinks. Finally, when the market bottoms, the insider starts accumulating under-hyped assets so that the process can start anew.
These cyclical exploits, along with their broader consequences, are written in the annals of financial euphoria and crises - from the Tulip Mania of the seventeenth century and the Mississippi and South Sea schemes of the eighteenth century, to the 'new-economy' miracle of the twentieth century and the sub-prime bubble of recent times. The histories of these episodes - and countless others in between - are highly revealing. They will tell you how
7 This method should not be confused with so-called 'value investing'. The latter tactics, immortalized by Graham and Dodd's Security Analysis (1934), also involve buying cheap assets; but what constitutes 'cheap' in this case is a matter of interpretation rather than exclu- sive insight into facts.
? 192 Capitalization
huge fortunes have been made and many more lost. They will teach you the various techniques of public opinion making, rumour campaigns, orches- trated promotion and Ponzi schemes. And they will introduce you to the leading private investors, corporate coalitions and government organs whose art of delusion has helped stir the greed and fear of capitalists, big and small. 8
However, there is one thing these stories cannot tell you, and that is the magnitude of hype. In every episode, investors were made to expect prices to go up or down, as the case may be. But price is not earnings, and as long as we do not know much about the earnings projections of capitalists, we remain ignorant of hype, even in retrospect.
Random noise
This factual void has enabled orthodox theorists to practically wipe the hype and eliminate the insiders. Granted, few deny that earnings expectations can be wrong, but most insist they cannot be wrong for long. Whatever the errors, they are at worst temporary and always random. And since hype is transitory and never systematic, it leaves insiders little to prey on and therefore no ability to persist.
The argument, known as the 'efficient market hypothesis', was formalized by Eugene Fama (1965; 1970) as an attempt to explain why financial markets seem to follow what Maurice Kendall (1953) called a 'random walk' - i. e. a path that cannot be predicted by its own history. The logic can be summa- rized as follows. At any point in time, asset prices are assumed 'optimal' in the sense of incorporating all available information pertaining to the capital- izing process. Now, since current prices are already 'optimal' relative to current knowledge, the arrival of new knowledge creates a mismatch. An unexpected announcement that British Petroleum has less oil reserves than previously reported, for example, or that the Chinese government has reversed its promise to enforce intellectual property rights, means that earlier profit expectations were wrong. And given that expectations have now been revised in light of the new information, asset prices have to be 're-optimized' accordingly.
Note that, in this scheme, truly new information is by definition random; otherwise, it would be predictable and therefore already discounted in the price. So if markets incorporate new information 'efficiently' - i. e. correctly and promptly - it follows that price movements must look as random as the new information they incorporate. And since ('technical analysis' notwith- standing) current price movements do seem random relative to their past moments, the theorist can happily close the circle and conclude that this must be so because new information is being discounted 'efficiently'. 9
8 For some notable histories, see Mackay (1841), Kindelberger (1978) and Galbraith (1990).
9 This first draft of the financial constitution is often softened by various amendments, partic- ularly to the definition of information and to the speed at which the market incorporates it.
? Elementary particles 193
There is a critical bit that needs to be added to this story, though. As it stands, the presumed efficiency of the asset market hangs crucially on the existence of 'smart money' and its hired experts. The reason is obvious. Most individual investors are blissfully unaware of new developments that are 'relevant' to earnings, few can appreciate their implications, and even fewer can do so accurately and quickly. However, since any mismatch between new information and existing prices is an unexploited profit opportunity, investors all have an incentive to obtain, analyse and act on this new infor- mation. And given that they themselves are ill equipped for the job, they hire financial analysts and strategists to do it for them.
These analysts and strategists are the engineers of market efficiency. They have access to all available information, they are schooled in the most up-to- date models of economics and finance, and there are enough of them in the beehive to find and eliminate occasional mistakes in judgement. The big corporations, the large institutional investors, the leading capitalists - 'smart money' - all employ their services. Individual investors' folly is 'smart money's opportunity. By constantly taking advantage of what others do not know, the pundits advertise their insight and keep the market on an efficient keel. And since by definition no one knows more than they do, there is nobody left to systematically outsmart the market. This, at any rate, is the official theology.
Flocks of experts and the inefficiency of markets
The problem is with the facts. As noted, until recently nothing much was known about expectations and hype, so the theory could never be put to the test. But the situation has changed. In 1971, a brokerage firm named Lynch, Jones and Ryan (LJR) started to collect earning estimates made by other brokers. The initial coverage was modest in scope and limited in reach. It consisted of projections by 34 analysts pertaining to some 600 individual firms, forecasts that LJR summarized and printed for the benefit of its own clients. But the service - known as the Institutional Brokers Estimate System, or IBES - expanded quickly and by the 1980s became a widely used electronic
According to Fischer Black (1986), the news always comes in two flavours: information and noise. Information is something that is relevant to 'theoretical value' (read true value), while noise is everything else. Unfortunately, since, as Black acknowledges, true value can never be observed, there is no way to tell what is 'relevant', and therefore no way to separate information from noise. And since the two are indistinguishable, everyone ends up trading on a mixture of both. Naturally, this mixture makes the theory a bit fuzzy, but Black is undeterred. To keep the market equilibrated, he loosens the definitions. An efficient market, he states, is one in which prices move within a 'factor of 2' of true value: i.e. between a high that is twice the (unknowable) magnitude of value and a low that is half its (unknowable) size. In his opinion, this definition of efficiency holds 90 per cent of the time in 90 per cent of the markets - although he concedes that these limits are not cast in stone and can be tailored to the expert's own likings (p. 533).
data provider. The system currently tracks the forecasts of some 90,000 analysts and strategists worldwide, regarding an array of corporate income statements and cash flow items. The forecasts cover both individual firms and broad market indices and are projected for different periods of time - from the next quarter through to the vaguely defined 'long term'. The estimates go back to 1976 for US-based firms and to 1987 for international companies and market indices.
And so, for the first time since the beginning of discounting more than half a millennium ago, there is now a factual basis to assess the pattern and accuracy of expert projections. This new source of data has not been lost on the experts. Given that any new information is a potential profit opportunity, along with IBES there emerged a burgeoning 'mini-science of hype': a systematic attempt to foretell the fortune tellers. 10
So far, the conclusions of this mini-science hardly flatter the forecasters and seriously damn their theorists. In fact, judging by the efficacy of estimates, the efficient market hypothesis should be shelved silently. It turns out that analysts and strategists are rather wasteful of the information they use. Their forecast errors tend to be large, persistent and very similar to those of their peers. They do not seem to learn from their own mistakes, they act as a herd, and when they do respond to circumstances, their adjustment is painfully lethargic.
A recent comprehensive study of individual analyst forecasts by Guedj and Bouchaud (2005) paints a dismal picture. The study covers 2,812 corporate stocks in the United States, the European Union, the United Kingdom and Japan, using monthly data for the period 1987-2004. Of its many findings, three stand out. First, the average forecast errors are so big that even a simple 'no-change' projection (with future earnings assumed equal to current levels) would be more accurate. Second, the forecasts are not only highly biased, but also skewed in the same direction: looking twelve months ahead, the average analyst overestimates the earnings of a typical corporation by as much as 60 per cent! (If analysts erred equally in both directions, the average error would be zero.) Although the enthusiasm cools down as the earnings announcement date gets closer, it remains large enough to keep the average forecast error as high as 10 per cent as late as one month before the reports are out. Finally, and perhaps most importantly, the projections are anything but random. The dispersion of forecasts among the analysts is very small - measuring between one third and one tenth the size of their forecast errors. This difference suggests, in line with Keynes, that analysts pay far more attention to the changing sentiment of other analysts than to the changing facts.
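Three of these yardsticks - the consensus bias, the 'no-change' benchmark and the dispersion-to-error ratio - are simple enough to compute. A minimal sketch with invented numbers (the study itself uses the proprietary IBES panel):

```python
# Sketch of the study's three yardsticks, on invented numbers.
import numpy as np

rng = np.random.default_rng(1)

actual_eps = 10.0   # realized earnings per share
prior_eps = 9.5     # last reported earnings (the 'no-change' forecast)

# Fifty hypothetical analysts: strongly biased upward, tightly clustered.
forecasts = rng.normal(loc=16.0, scale=1.0, size=50)

consensus_error = forecasts.mean() - actual_eps   # bias of the average forecast
no_change_error = prior_eps - actual_eps          # error of the naive benchmark
dispersion = forecasts.std(ddof=1)                # disagreement among analysts

print(f"consensus error:  {consensus_error:+.2f}")                    # large, positive
print(f"no-change error:  {no_change_error:+.2f}")                    # far smaller
print(f"dispersion/error: {dispersion / abs(consensus_error):.2f}")   # << 1: a herd
```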
Behavioural theorists of finance often blame these optimistic, herd-like projections on the nature of the analyst's job. The analysts, they argue, tend to forge non-arm's-length relationships with the corporations they cover, and this intimacy leads them to 'err' on the upside. Moreover, the analysts'
? 10 For an extensive annotated bibliography on earnings forecasts, see Brown (2000).
preoccupation with individual corporate performance causes them to lose sight of the broader macro picture, creating a blank spot that further biases their forecast.
These shortcomings are said to be avoided by strategists. Unlike analysts who deal with individual firms, strategists examine broad clusters of corporations, such as the S&P 500 or the Dow Jones Industrial Average. They also use different methods. In contrast to the analysts who build their projections from the bottom up, based on company 'fundamentals', strategists construct theirs from the top down, based on aggregate macroeconomic models spiced up with political analysis. Finally, being more detached and closely attuned to the overall circumstances supposedly makes them less susceptible to cognitive biases.
Yet this approach does not seem very efficient either. Darrough and Russell (2002) compare the performance of bottom-up analysts to top-down strategists in estimating next year's earnings per share for the S&P 500 and Dow Jones Industrial Average over the period 1987-99. 11 They show that although strategists are less hyped than analysts, their estimates are still very inaccurate and path dependent. They are also far more lethargic than analysts in revising their forecasts. Being locked into their macro models, they often continue to 'project' incorrect results retroactively, after the earnings have already been reported! The appendix to this chapter examines the temporal pattern of strategist estimates. It demonstrates not only that their forecast errors are very large, but that they follow a highly stylized, cyclical pattern. Their hype cycle is several times longer than the forecast period itself, and its trajectory is systematically correlated with the direction of earnings.
Let there be hype
And so the Maginot Line of market efficiency crumbles. The analysts and strategists know full well that 'it is better for reputation to fail conventionally than to succeed unconventionally', as Keynes once put it (1936: 158). Consequently, rather than ridding each other of the smallest of errors, they much prefer the trodden path of an obedient flock. Ironically, this preference is greatly strengthened by the fact that most of them actually believe in market efficiency. Ultimately, the market must be right, and since it is their recommendations that keep the market on track, it follows that to deviate from their own consensus is to bet against the house. Better to run with the herd.
11 Bottom-up projections for each index are constructed in two stages: first by averaging for each individual company in the index the estimates of the different analysts, yielding the company's 'consensus forecast'; and then by computing the weighted average of these consensus forecasts, based on the relative size of each company in the index. The top-down consensus forecasts for each index are obtained by averaging the projections of the different strategists.
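The two-stage construction described in note 11 is mechanical enough to spell out in code. A sketch, with hypothetical firms, analyst estimates and index weights:

```python
# Sketch of the two-stage bottom-up construction described in note 11.
# Firm names, estimates and index weights are all hypothetical.

def bottom_up_consensus(analyst_estimates, index_weights):
    """Stage 1: average each firm's analyst estimates ('consensus forecast').
    Stage 2: weight the consensus forecasts by each firm's share of the index."""
    consensus = {firm: sum(ests) / len(ests)
                 for firm, ests in analyst_estimates.items()}
    return sum(consensus[firm] * w for firm, w in index_weights.items())

estimates = {
    "Alpha Corp": [5.2, 5.6, 5.1],            # per-share forecasts by analysts
    "Beta Inc":   [2.0, 2.4],
    "Gamma Ltd":  [8.8, 9.1, 9.0, 8.7],
}
weights = {"Alpha Corp": 0.5, "Beta Inc": 0.2, "Gamma Ltd": 0.3}  # sum to 1

print(f"bottom-up index forecast: {bottom_up_consensus(estimates, weights):.2f}")
```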
This inherent complacency, amplified by the folly of so-called 'dumb money', means that there is no built-in 'mechanism' to stop the insiders. In fact, the very opposite is the case. Since the experts tend to move in a flock, it is enough to influence or co-opt those who lead (the mean estimate) in order to shift the entire pack (the distribution of estimates). And the temptation to do so must be enormous. Fluctuations in hype can be several times larger than the growth of actual earnings, so everything else being equal, a dollar invested in changing earning expectations could yield a return far greater than a dollar spent on increasing the earnings themselves.
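The temptation can be quantified with the capitalization formula K = HE/r from Equations (3) and (4). In the sketch below, every magnitude - the earnings, the discount rate and especially the assumed cost and reach of each 'lever' - is invented, but the asymmetry it illustrates is the one described above:

```python
# Sketch: a dollar spent moving expectations versus a dollar spent
# moving earnings, given K = H * E / r. All magnitudes are invented.
E, H, r = 100.0, 1.0, 0.05           # earnings ($m), hype, discount rate
K = H * E / r                        # baseline capitalization: $2,000m

# Lever 1: spend $10m on operations to lift earnings by, say, 1 per cent.
K_earnings_lever = H * (E * 1.01) / r
# Lever 2: spend the same $10m swaying the lead analysts to lift expectations
# by, say, 5 per cent (hype swings are far wider than earnings growth).
K_hype_lever = (H * 1.05) * E / r

print(f"gain from the earnings lever: ${K_earnings_lever - K:,.0f}m")  # $20m
print(f"gain from the hype lever:     ${K_hype_lever - K:,.0f}m")      # $100m
```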
Pressed to the wall, mainstream finance responded to these anomalies by opening the door to various theories of 'irrationality' - from Herbert Simon's 'bounded rationality' (1955; 1979), through Daniel Ellsberg's 'ambiguity aversion' (1961), to Daniel Kahneman and Amos Tversky's 'prospect theory' (1979), to Richard Thaler's broader delineation of 'behavioural finance' (De Bondt and Thaler 1985). These explanations, though, remain safely within the consensus. Like their orthodox counterparts, they too focus on the powerless individual who passively responds to given circumstances. Unlike his nineteenth-century predecessor, this 'agent' is admittedly imperfect. He is no longer fully informed and totally consistent, he tends to harbour strange preferences and peculiar notions of utility (and may even substitute 'satisficing' for 'maximizing'), and he sometimes lets his mood cloud his better judgement.
These deviations, argue their theorists, fly in the face of market efficiency: they show that irrational hype can both exist and persist. But that conclusion, the theorists are quick to add, does not bring the world to an end. As noted in Chapter 10, individual irrationality, no matter how rampant, is assumed to be bounded and therefore predictable. And since predictable processes, no matter how irrational, can be modelled, the theorists can happily keep their jobs.
Of course, what the models cannot tell us (and the financial modellers are careful never to ask) is how these various 'irrationalities' are being shaped, by whom, to what ends and with what consequences. These aspects of capital accumulation have nothing to do with material technology and individual utility. They are matters of organized power. And on this subject, finance theorists and capitalist insiders are understandably tight-lipped. The only way to find out is to develop a radical political economy of hype independent of both.
The discount rate
If putting a number on future income and wealth seems difficult, knowing how much to trust one's prediction is next to impossible - or at least, that is how it was for much of human history. When Croesus, the fabulously rich king of Lydia, asked Solon of Athens if 'ever he had known a happier man than he', the latter refused to be impressed by the monarch's present wealth:
The gods, O king, have given the Greeks all other gifts in moderate degree; and so our wisdom, too, is a cheerful and a homely, not a noble and kingly wisdom; and this, observing the numerous misfortunes that attend all conditions, forbids us to grow insolent upon our present enjoyments, or to admire any man's happiness that may yet, in course of time, suffer change. For the uncertain future has yet to come, with every possible variety of fortune; and him only to whom the divinity has continued happiness unto the end, we call happy; to salute as happy one that is still in the midst of life and hazard, we think as little safe and conclusive as to crown and proclaim as victorious the wrestler that is yet in the ring.
(Plutarch 1859, Vol. 1: 196-97, emphasis added)
Solon's caution was not unfounded, for in due course the hubristic Croesus lost his son, wife and kingdom. And in this respect, we can say that little has changed. The future is still uncertain, but the capitalist rulers, like their royal predecessors, continue to convince themselves that somehow they can circumvent this uncertainty. The main difference is in the methods they use. In pre-capitalist times uncertainty was mitigated by the soothing words of astrologists and prophets, whereas nowadays the job is delegated to the oracles of probability and statistics.
Capitalist uncertainty is built right into the discounting formula. To see why, recall our derivation of this formula in Equations (1) to (6) in Chapter 9. We started by defining the rate of return (r) as the ratio of the known earnings stream (E) to the known dollar value of the invested capital (K), such that r = E/K. The expression is straightforward. It has one equation, one unknown and an obvious solution. Next, we rearranged the equation. Since the rate of return can be calculated on the basis of the earnings and the original investment, it follows that the original investment can be calculated based on the rate of return and the earnings, so that K = E/r. The result is the discount formula, the social habit of thinking with which capitalists began pricing their capital in the fourteenth century.
Mathematically, the two formulations seem identical, if not circular (recall the Cambridge Controversy). But in reality there is a big difference between them. The first expression is ex post. It computes the realized rate of return based on knowing both the initial investment and the subsequent earnings. The second expression is ex ante. It calculates the present value of capital based on the future magnitude of earnings. These future earnings, however, cannot be known in advance. Furthermore, since capitalists do not know their future earnings, they cannot know the rate of return these earnings will eventually represent. Analytically, then, they are faced with the seemingly impossible task of solving one equation with three unknowns.
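The asymmetry between the two readings of the same identity can be written out directly. A sketch (the dollar figures are arbitrary):

```python
# Sketch: the ex post and ex ante readings of r = E/K and K = E/r.
# Dollar figures are arbitrary.

def realized_rate(earnings: float, invested: float) -> float:
    """Ex post: both E and K are known; r is the single unknown."""
    return earnings / invested                    # r = E / K

def capitalize(expected_earnings: float, discount_rate: float) -> float:
    """Ex ante: neither input is knowable; both must be conjured up."""
    return expected_earnings / discount_rate      # K = E / r

print(realized_rate(earnings=8.0, invested=100.0))            # 0.08
print(capitalize(expected_earnings=8.0, discount_rate=0.08))  # 100.0
```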
In practice, of course, that is rarely a problem.
Capitalists simply conjure up two of the unknown numbers and use them to compute the third. The question for us is how they do it and what the process means for accumulation. The previous section took us through the first step: predicting future earnings. As we saw, these predictions are always wrong. But we also learned that the errors are not unbounded, and that, over a sufficiently long period of time, the estimates tend to oscillate around the actual numbers. The second step, to which we now turn, is articulating the discount rate - the rate that the asset is expected to yield with the forecasted earnings. And it turns out that the two steps are intimately connected. The discount rate mirrors the confidence fortunetelling capitalists have in their own forecasts: the greater their uncertainty, the higher the discount rate - and vice versa.
The normal and the risky
What is the 'proper' discount rate? The answer has a very long history, dating back to Mesopotamia in the third millennium BCE (a topic to which we return in the next chapter). 12 Conceptually, the computation has always involved two components: a 'benchmark' rate plus a 'deviation'. The meaning of these two components, though, has changed markedly over time.
Until the emergence of capitalization in the fourteenth century, both components were seen as a matter of state decree, sanctioned by religion and tradition, and modified by necessity. The nobility and clergy set the just lending rates as well as the tolerated zone of private divergence, and they often kept them fixed for very long periods of time (Hudson 2000a, 2000b).
Neoclassicists never tire of denying this 'societal' determination. Scratch the pre-capitalist surface, they insist, and underneath you will find the eternal laws of economics. From the ancient civilizations and early empires, to the feudal world, to our own day and age, the underlying logic has always been the same: the productivity of capital determines the 'normal' rate of return, and the uncertainty of markets determines the 'deviations' from that normal.
This confidence seems unwarranted. We have already seen that the neoclassical theory of profit is problematic, to put it politely. But even if the theory were true to the letter, it would still be difficult to fathom how its purely capitalist concepts could possibly come to bear on a pre-capitalist discount rate. First, prior to the emergence of capitalization in the fourteenth century the productivity doctrine was not simply unknown; it was unthinkable. Second, there were no theoretical tools to conceive, let alone quantify, uncertainty. And, finally, there were no systematic data on either productivity or uncertainty to make sense of it all. In this total blackout, how could anyone calculate the so-called 'economic' discount rate?
12 There is considerable recent literature on the ancient origins of interest, debt and money. These contrarian writings, partly inspired by the work of Mitchell-Innes (1913; 1914), critique the undue imposition of neoclassical logic on pre-capitalist societies and instead emphasize a broader set of political, religious and cultural determinants. Important collections include Hudson and Van de Mieroop (2002), Hudson and Wunsch (2004), Ingham (2004) and Wray (2004).
Probability and statistics
These concepts have become meaningful only since the Renaissance. The turning point occurred in the seventeenth century, with the twin invention of probability and statistics. 13 In France, Blaise Pascal and Pierre de Fermat, mesmerized by the abiding logic of a game of chance, began to articulate the mathematical law of bourgeois morality. Probability was justice. In the words of Pascal, 'the rule determining that which will belong to them [the players] will be proportional to that which they had the right to expect from fortune. . . [T]his just distribution is known as the division' (cited in Bernstein 1996: 67, emphases added). 14
At about the same time, Englishmen John Graunt, William Petty and Edmund Halley took the first steps in defining the field of practical statistics. The term itself connotes the original goal: to collect, classify and analyse facts bearing on matters of state. And indeed, Graunt, whose 1662 estimate of the population of London launched the scientific art of sampling, was very much attuned to the administrative needs of the emerging capitalist order. His practical language would have been music to the ears of today's chief executives and finance ministers:
It may be now asked, to what purpose tends all this laborious buzzling and groping? . . . I Answer. . . That whereas the Art of Governing, and the true Politiques, is how to preserve the Subject in Peace, and Plenty, that men study onely that part of it, which teacheth how to supplant, and over-reach one another, and how, not by fair out-running, but by tripping up each other's heels, to win the Prize. Now, the Foundation, or Elements of this honest harmless Policy is to understand the Land, and the hands of the Territory to be governed, according to all their intrinsick, and accidental differences. . . . It is no less necessary to know how many people there be of each Sex, State, Age, Religion, Trade, Rank, or Degree, &c. by the knowing whereof Trade and Government may be made more certain, and Regular; for, if men know the People as aforesaid, they might know the consumption they would make, so as Trade might not be hoped for where it is impossible.
(Graunt 1662: 72-73, original emphases)
Although initially independent, probability and statistics were quickly intertwined, and in more than one way. The new order of capitalism unleashed multiple dynamics that amplified social uncertainty. Instead of the stable and
13 The social history of these related disciplines is told in Hacking (1975; 1990) and Bernstein (1996). Our account here draws partly on their works.
14 Probability theory was in fact developed a century earlier, by the Italian mathematician Girolamo Cardano. His work, however, was ahead of its time and therefore largely ignored.
clear hierarchies of feudalism came a new ethic of autonomous individualism and invisible market forces. The slow cycle of agriculture gave rise to bustling industrial cities and rapidly growing populations. The relatively simple structures of personal loyalty succumbed to the impersonal roller coaster of accumulation and the complex imperatives of government finances and regulations. More and more processes seemed in flux. But then, with everything constantly changing, how could one tell fact from fiction? What was the yardstick for truth on the path to societal happiness and personal wealth?
The very same difficulty besieged the new sciences of nature. In every field, from astronomy and physics to chemistry and biology, there was an explosion of measurement. But the measurements rarely turned out to be the same - so where was truth? With so many 'inaccuracies', how could one pin down the ultimate laws of nature?
The solution, in both society and science, came from marrying logical probability with empirical statistics. According to this solution, truth is hidden in the actual statistical facts, and probability theory is the special prism through which the scientist can see it. Any one measurement may be in error. But when the errors are random they tend to cancel each other out, and if we increase the size of the sample we can get as close to the truth as we wish. Moreover, and crucially for our purpose here, probability theory can also tell us how wrong our pronouncement of truth is 'likely' to be. It tosses the al-zahr - Arabic for 'dice' - to reckon the hazards.
This marriage of logic and measurement changed the concept of the unknown, making it seem less intimidating. Of course, the fear is still very much there: 'Unless you are running scared all the time, you're gone', explains the quintessential forward-looking capitalist, Bill Gates (1994). But the unknown, having been mediated through probability and statistics, has become less mysterious and, in that sense, less menacing. For the first time in history, uncertainty has been given a shape: it has a 'distribution'. Probability and statistics draw a clear relationship between the 'normal' and the 'dispersion' around it, between what is supposedly 'natural' and 'true' and what is 'distorted' and 'devious', between the rulers at the 'centre' and the rebels and radicals at the 'margins'. They translate the unknown into seemingly precise 'standard deviations', and by so doing give human beings a comforting 'measure of their ignorance'.
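The statistical mechanics behind this comfort can be shown in a few lines. A sketch with an arbitrary 'true' value and noise level: random errors cancel on average, and the standard error - the statistician's 'measure of ignorance' - shrinks with the square root of the sample size:

```python
# Sketch: random errors cancel, and the 'measure of ignorance' shrinks
# as the sample grows. The true value and noise level are arbitrary.
import numpy as np

rng = np.random.default_rng(2)
truth = 100.0

for n in (10, 1_000, 100_000):
    measurements = truth + rng.normal(0.0, 5.0, size=n)  # noisy observations
    estimate = measurements.mean()                       # errors largely cancel
    standard_error = measurements.std(ddof=1) / np.sqrt(n)
    print(f"n={n:>7}: estimate={estimate:8.3f}  standard error={standard_error:.3f}")
```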
The effect of this newly found confidence has been nothing short of revolutionary. It has opened the door to massive advances in the natural sciences. Virtually every field - from geodesy and astronomy, to classical and quantum statistical mechanics, to the biostatistics of evolution and medicine - has been rewritten by the new technique. And the same has happened in political economy. Every aspect of capitalism - from insurance, to engineering, to production, salesmanship, finance, public management, weapon development, population control, health care, mass psychology, the media and education, to name a few - has been re-articulated and further developed to leverage the power of probability and statistics. The belief that one can at least sketch the unknown has encouraged social initiative and intellectual creativity. The sense of knowing the 'odds' has made it much easier to dare to take a risk.
Averting risk: the Bernoullian grip
For the running-scared capitalist, though, probability and statistics are a mere starting point. They pretend to give the odds - but the odds alone are still devoid of meaning. And that is where utilitarianism comes into the picture.
The issue can be illustrated with a simple example. Suppose Bill Gates considers acquiring one of two software companies, Civilsoft and Weaponsoft. Civilsoft sells in the open market and is a bit volatile. The analysts tell Gates that, in their view, it has a 50 per cent chance of generating annual earnings of $50 million and a 50 per cent chance of generating annual earnings of $150 million. Weaponsoft is different. It sells to the military and has recently managed to secure a long-term contract with the US Department of Defense. According to the analysts, it is certain to generate $100 million annually. Now, probability calculations make the two firms equally attractive: mathematically, both have expected annual earnings of $100 million. 15 And so, if Gates believes his analysts he should be indifferent as to which of the two he should acquire.
Not so, argued Daniel Bernoulli (1738). In his seminal paper, published more than two centuries before Gates was born, he stipulated that the measurement of risk involves more than the mere statistical odds. It requires that we put a 'moral' judgement on the expected dollars and cents - a judgement that he insisted must be based on diminishing marginal utility.
According to this logic, Gates, like the rest of us, should contemplate not the expected dollar earnings the companies will generate, but the expected utility he will get from consuming those earnings. This modification makes a big difference. '[A]ny increase in wealth no matter how insignificant', wrote Bernoulli, 'will always result in an increase in utility which is inversely proportionate to the quantity of goods already possessed' (p. 25). So the first dollar Gates earns generates more utility than the second, the second more than the third, and so on all the way to the billionth dollar and beyond.
To illustrate the consequence of this stipulation, let us split the expected earnings into $50 million chunks and assume for simplicity that with diminishing marginal utility the first chunk gives Gates 3,000 utils, the second 2,000 utils and the third 1,000 utils. With this assumption, the takeover targets no longer look equally attractive: the less risky Weaponsoft is expected to generate 5,000 utils, whereas the more volatile Civilsoft is likely to give only
15 Mathematically, the expected earnings of Civilsoft are: $50 million * 0.5 + $150 million * 0.5 = $100 million, the same as Weaponsoft's.
4,500. 16 And since ultimately all Mr Gates cares about is hedonic consumption, it is better for him to acquire the military contractor. It is likely to give him 500 more utils per annum.
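The whole comparison, including the arithmetic of notes 15 and 16, fits in a few lines. A sketch using the stylized utility schedule above (3,000, 2,000 and 1,000 utils for successive $50 million chunks):

```python
# Sketch of the Civilsoft/Weaponsoft example, using the stylized
# utility schedule from the text (notes 15 and 16).
CHUNK_UTILS = [3000, 2000, 1000]       # utils per successive $50m chunk

def utils(earnings_millions: float) -> int:
    chunks = int(earnings_millions // 50)
    return sum(CHUNK_UTILS[:chunks])   # diminishing marginal utility

def expected(outcomes) -> float:
    """outcomes: list of (probability, value) pairs."""
    return sum(p * v for p, v in outcomes)

civilsoft = [(0.5, 50.0), (0.5, 150.0)]
weaponsoft = [(1.0, 100.0)]

# Same expected dollars...
print(expected(civilsoft), expected(weaponsoft))        # 100.0 100.0
# ...but different expected utils: 4,500 versus 5,000.
print(expected([(p, utils(v)) for p, v in civilsoft]))  # 4500.0
print(expected([(p, utils(v)) for p, v in weaponsoft])) # 5000.0
```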
Finance theory has never managed to shake loose of the Bernoullian grip. His paper triggered a deluge of publications on risk, many of which modified and revised his original formulation. But most remain locked behind his three subjectivist tenets. First, risk ultimately is a personal matter. Second, attitude to risk is rooted in the individual's hedonic preferences. And third, because of diminishing marginal utility, most individuals tend to be risk averse. This grip keeps the risk analysis of contemporary capitalist power hostage to the eighteenth-century belief in individual utilitarianism.
The unknowable
Of course, most theorists of capitalism ignore power. So before continuing we should point out that Bernoulli's mechanical hedonism may be inappropriate for the study of risk quite apart from the absence of power. First, there is the question of the odds. Capitalists are concerned with the future, yet statistical estimates of probabilities can only be drawn from the past. This is a crucial mismatch. As David Hume's Treatise of Human Nature (1739) tells us, the mere fact that all past experiments have found water to boil at 100 degrees Celsius does not mean that the same will happen next time we put the kettle on the stove. Natural scientists have managed to assume this challenge away by stipulating the stability of natural laws (whether deterministic or stochastic), but this stipulation seems a bit stretched when applied to society.
The inherent difficulty of calculating the social odds was heightened during the first half of the twentieth century. The combined onslaught of revolutions, financial crises, a Great Depression and two world wars suggested that the problem was not merely one of assigning odds to possible outcomes, but of specifying what those outcomes might be in the first place.
According to Frank Knight (1921), risk calculations presuppose a known set of odds. But in society, the future contains an element of novelty, and novelty cannot be pre-assigned a probability: it is unique and therefore inherently uncertain. Even Keynes, whose belief in the existence of so-called objective social probability survived the First World War, caved in after the Second. In matters of society, he confessed, the future is largely unknowable:
By 'uncertain' knowledge, let me explain, I do not mean merely to distinguish what is known for certain from what is only probable. The game of roulette is not subject, in this sense, to uncertainty; nor is the prospect of
16 For Weaponsoft, the expected utility is the sum of 3,000 utils for the first $50 million chunk and 2,000 utils for the second. For Civilsoft, the computation is: 3,000 utils * 0.5 + (3,000 utils + 2,000 utils + 1,000 utils) * 0.5.
a Victory bond being drawn. Or, again, the expectation of life is only slightly uncertain. Even the weather is only moderately uncertain. The sense in which I am using the term is that in which the prospect of a European war is uncertain, or the price of copper and the rate of interest twenty years hence, or the obsolescence of a new invention, or the position of private wealth-owners in the social system in 1970. About these matters there is no scientific basis on which to form any calculable probability whatever. We simply do not know.
(Keynes 1937: 213-14, emphasis added)
And then there is the second problem. Even if we convince ourselves that the mathematical odds exist and that we can somehow know them, there is still the task of assigning to these odds utilitarian weights. Without these weights, there is no point talking about Bernoullian risk. Yet these weights, made up of utils, vary from person to person and from moment to moment, and this fluidity has implications. Any given asset must be seen as having not one, but many quantities of risk (as many as there are potential capitalists). Furthermore, the quantity of risk, being partly subjective, will change with preferences even if the so-called objective odds remain unaltered. This ever-shifting multiplicity makes it difficult to pin down the 'correct' risk premium and therefore to specify the 'proper' discount rate. And with this rate hanging in the air, how are capitalists to compute an asset's 'true' present value?
The capital asset pricing model
These logical challenges proved no match for the capitalist nomos. Although investors may be unable to calculate risk on their own, they can ask the know-all market to do it for them. All they need is a bureaucratic blueprint disguised as theory, and Lord Keynes was prescient enough to anticipate what it would take to produce one. His checklist was short: (1) believe that the present odds are a reliable guide to future ones; (2) assume that other investors got those odds right; and (3) conclude that their relevant computations are already reflected in asset prices (Keynes 1937: 214). The instructions were simple enough, and when a year later Paul Samuelson (1938) announced that prices reveal to us what we desire but cannot express ('revealed preferences'), the road for an operational theory of risk was finally wide open.
The glory went to Harry Markowitz and William Sharpe. Markowitz (1952; 1959) gave investors a quantitative definition of risk and told them how to 'optimize' risk and return through diversification. Sharpe (1964), building on Markowitz's insight, showed capitalists how to tease out of the market the 'true' risk premium with which to discount their assets. These contributions closed the circle. The capitalization ritual was now fully articulated, and the two inventors went on to collect the Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel.
Portfolio selection
Markowitz's manuals focused on the Bernoullian individual: the risk-averse investor. In buying and selling financial assets, he said, 'the investor does (or should) consider expected return a desirable thing and variance of return an undesirable thing' (1952: 77, original emphasis). And the best method to achieve both goals, he concluded, is to diversify.
Although Markowitz himself spoke merely of the 'variance' of returns - defined as the average squared deviation of the rate of price change from its own mean value - the term was quickly adopted as a synonym for risk. This in itself was a major achievement. Until Markowitz, there was no quantitative definition for risk, let alone one that everyone agreed on. 17 So the fact that he was able to galvanize the 'investment community' around this concept - even if he never intended to - is already worth a Nobel.
But Markowitz did much more than that. By showing why risk should be handled through diversification, he provided the justification for an old practice and helped underwrite the new trend of institutional investing. To illustrate his logic, consider a portfolio comprising different financial assets. If the market prices of these assets do not move completely in tandem (so that the correlations between their rates of change are less than 1), their unique fluctuations will partly offset one another. This partial offsetting has a great benefit: it causes the price volatility of the portfolio as a whole to be smaller than the average volatility of the individual assets. By owning a portfolio of different assets, therefore, the capitalist can enjoy their average return while suffering less than their average 'risk'. Diversification, it now seemed, offered an entirely free lunch.
Which portfolio should the capitalist own? Conceptually, it is possible to plot on a two-dimensional chart the return/variance attributes of all possible portfolios. Of these endless combinations, there is a subset that Markowitz identified as 'efficient'. These are the best deals. Each efficient portfolio offers the minimum variance for a given return - or, alternatively, the maximum return for a given variance. The only way to do better on one attribute is to give up on the other, and vice versa. Conveniently, all efficient portfolios lie on a well-defined 'efficient frontier', and the Bernoullian capitalist simply needs to pick the one that equilibrates her very own greed and fear.
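The algebra of the free lunch is compact. A sketch with two hypothetical assets of equal volatility and equal weights: as the correlation between their price changes falls below 1, the volatility of the portfolio drops below that of its components:

```python
# Sketch: portfolio volatility falls below the average volatility of its
# components whenever correlations are below 1. Parameters are hypothetical.
import numpy as np

vol = np.array([0.20, 0.20])   # two assets, each with 20% volatility
w = np.array([0.5, 0.5])       # equal portfolio weights

for rho in (1.0, 0.5, 0.0):    # correlation between the two price changes
    cov = np.array([[vol[0]**2,              rho * vol[0] * vol[1]],
                    [rho * vol[0] * vol[1],  vol[1]**2]])
    portfolio_vol = np.sqrt(w @ cov @ w)
    print(f"correlation {rho:.1f}: portfolio volatility {portfolio_vol:.3f}")
# 1.0 -> 0.200 (no benefit); 0.5 -> 0.173; 0.0 -> 0.141: the 'free lunch'
```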
A few years after Markowitz made his mark, James Tobin (1958) offered an even sweeter deal. If investors are able to borrow and lend at a 'risk-free' rate of interest (such as the rate on US T-bills) they can in fact outperform the efficient frontier. All it takes is two easy steps. First, they need to single out on the efficient frontier that particular portfolio (labelled M for convenience) which, when combined with borrowing or lending, yields the highest return
17 Ricciardi (2004) managed to collate a list of no less than 150 unique risk indicators - hardly an indication of unanimity.
for every level of volatility. And then they make their move. Those who are more risk averse can invest part of their money in M, putting the rest of it into risk-free assets (i.e. lending it to the central bank). And those who are less risk averse can borrow at the risk-free interest rate and invest the extra cash in additional units of M.
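Tobin's two steps are equally easy to sketch. With a hypothetical risk-free rate and a hypothetical return/volatility pair for M, every mixture of the two lies on a straight line from the risk-free rate through M - the line that outperforms the curved frontier everywhere except at M itself:

```python
# Sketch of Tobin's recipe: mix portfolio M with risk-free lending or
# borrowing. The rates and M's characteristics are hypothetical.
r_f = 0.03                   # risk-free rate (e.g. on T-bills)
r_m, vol_m = 0.10, 0.15      # assumed return and volatility of portfolio M

def mix(alpha: float):
    """alpha = share of wealth in M; alpha > 1 means borrowing at r_f."""
    ret = alpha * r_m + (1 - alpha) * r_f
    vol = alpha * vol_m      # the risk-free asset itself has zero volatility
    return ret, vol

for alpha in (0.5, 1.0, 1.5):    # cautious lender, all-in, leveraged borrower
    ret, vol = mix(alpha)
    print(f"alpha={alpha:.1f}: return {ret:.3f}, volatility {vol:.3f}")
# All three points lie on a straight line from (0, r_f) through M.
```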
