Distinguishing the Signal from the Noise in Investing


All the passages below are taken from James Montier's book, "The Little Book of Behavioral Investing," 2010 Edition.


WHEN IT COMES TO INVESTING, WE SEEM TO BE ADDICTED TO INFORMATION. The whole investment industry is obsessed with learning more and more about less and less, until we know absolutely everything about nothing. Rarely, if ever, do we stop to consider how much information we actually need to know in order to make a decision. As Daniel J. Boorstin opined, "The greatest obstacle to discovery is not ignorance---it is the illusion of knowledge."

The idea that more information must be better seems obvious. After all, if the information is of no use, then it can simply be ignored. However, psychological studies cast doubt on the soundness of this seemingly innocuous belief.


Is More Better?

In one study,1 eight experienced bookmakers were shown a list of 88 variables found on a typical past performance chart of a racehorse (e.g., the weight to be carried, the number of races won, the performance in different conditions, and so on). Each bookmaker was then asked to rank the pieces of information by importance.

Having done this, the bookmakers were then given data for 45 past races and asked to rank the top five horses in each race.

Each bookmaker was given the past data incrementally: first the 5 variables he had ranked most important, then 10, then 20, and finally 40. Hence each bookmaker predicted the outcome of each race four times---once for each of the information sets. For each prediction the bookmakers were asked to give a degree of confidence ranking in their forecast.

With five pieces of information, accuracy and confidence were quite closely related. Sadly, as more and more information was made available, two things happened. First, accuracy flat-lined. The bookmakers were as accurate when they had five pieces of information as when they had 40 items to help them. So much for more information helping people make better decisions.

Second, the degree of confidence expressed in the forecasts increased massively with information. With five pieces of information the bookmakers were around 17 percent confident; by the time they had 40 items of information, confidence had exploded up to more than 30 percent (without any improvement in accuracy, remember!). So all the extra information wasn't making the bookmakers any more accurate, but it was making them increasingly overconfident.

Another group of psychologists has recently found very similar patterns when it comes to American football.2 They tested football fans' ability to predict the outcome and point spread in 15 NCAA games. In order to take part in the study, participants had to pass a test demonstrating that they were highly knowledgeable about college football. Thus, the subjects could safely be described as experts.

The information (selected by surveying non-participating football fans) was presented in a random order over five rounds. Each round revealed six items of information in a random order. The information provided deliberately excluded team names, since these were too leading. Instead, the items covered a wide range of statistics on football such as fumbles, turnover margin, and yards gained.

To see if more information really was better information, a computer model was given the same data as the humans. In each round, the computer model was given more information, replicating the conditions the human players faced.

The results are reassuring for those who argue that more is always preferable to less. With just the first set of information (six items) the computer model was about 56 percent accurate. As more information was gradually added, the predictive accuracy rose to 71 percent. So for the computer, more information truly was better.

What about the humans? Much like the bookmakers, the football experts' accuracy didn't improve with additional information. It didn't matter whether they had 6 or 30 items of information; their accuracy was about the same. However, the participants' confidence soared as more information was added. For instance, participants started off at 69 percent confident with 6 pieces of information, and rose to nearly 80 percent confident by the time they had 30 items of information. Just as with the bookmakers, confidence but not accuracy increased with the amount of information available.


When Less Is More

Why was there such a difference between the computer and the humans? We humans have limits on our ability to process information. We simply aren't supercomputers that can carry out massive amounts of calculations in a fraction of a second. We have limited processing capacity.

More evidence of these bounds was provided by a recent study on choosing cars.3 In the study, participants were asked to choose between four different cars. They faced one of two conditions; they were either given just 4 attributes per car or 12 attributes per car. In both cases, one of the cars was noticeably better than the others, with some 75 percent of its attributes being positive. Two cars had 50 percent positive attributes and one car had only 25 percent. With only a low level of information, nearly 60 percent of subjects chose the best car. However, when faced with information overload (12 attributes to try and think about and juggle), only around 20 percent of subjects chose the best car.

A similar outcome was found among financial analysts.4 The task they were given was to forecast fourth-quarter earnings in 45 cases. In fact, there were 15 firms, but each firm was presented in one of three different information formats. The information formats were:


1. Baseline data consisting of the past three quarters of EPS, net sales, and stock price.

2. Baseline data plus redundant or irrelevant information, that is, information that was already embodied in the baseline data such as the PE ratio.

3. Baseline data plus nonredundant information that should have improved forecasting ability, such as the fact that the dividend was increased.


None of the participants realized that they had all seen the same company in three different formats, perhaps because each presentation was separated by at least seven other profiles.

Each respondent was asked not only for a forecast but also for his or her confidence in that forecast. Both redundant and nonredundant information significantly increased the forecast error. However, guess what? The self-reported confidence ratings for each of the forecasts increased massively with the amount of information that was available.


From the Emergency Room to the Markets

Given my earlier comments on the problems of confidence in the medical profession, you might be inclined to say that there is little that investors could possibly learn from doctors. However, you would be wrong, and here is why.

Our tale starts in a hospital in Michigan. Physicians at this particular hospital tended to send about 90 percent of all patients with severe chest pains to the cardiac care unit. The unit was becoming seriously overcrowded, care standards were dropping, and costs were rising.

The decision to send so many patients to the ICU reflected concerns among doctors over the costs of a false negative (i.e., not admitting someone who should have been admitted). Fine, you might say, rather an overcrowded hospital than the alternative. But this ignores the risks inherent in entering the ICU. About 20,000 Americans die every year from a hospital-transmitted illness. The risks of contracting such a disease are markedly higher in an ICU than in a conventional ward.

The most damning problem for the Michigan hospital doctors was that they sent about 90 percent of those who needed to be admitted and also 90 percent of those who didn't need to be admitted to an ICU. They did no better than chance!

Such a performance raises the question of why doctors found it so difficult to separate those who needed specialist care from those who didn't. Luckily, this question has been examined.5

The researchers studying this problem uncovered a startling fact---the doctors were looking at the wrong factors. They tended to overweight risk factors such as a family history of premature coronary artery disease, age, male gender, smoking, diabetes mellitus, increased serum cholesterol, and hypertension.

While looking at these risk factors helps assess the overall likelihood of someone having cardiac ischemia, they have little diagnostic power. They are what might be called pseudo-diagnostic items. Far better diagnostic cues are available. Research has revealed that the nature and location of patients' symptoms, their history of ischemic disease, and certain specific electrocardiographic findings are by far the most powerful predictors of acute ischemia, infarction, and mortality. It is these factors that the doctors should be looking at, rather than the risk factors that they were actually paying attention to.

Could anything be done to help doctors look at the right things? The researchers came up with the idea of using laminated cards with various probabilities marked against diagnostic information. The doctors could then follow these tables and multiply the probabilities according to the symptoms and test findings in order to estimate the overall likelihood of a problem. If this was above a set threshold, then the patient was to be admitted to cardiac ICU; otherwise a normal bed with a monitor would suffice.
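The multiply-and-threshold logic of the laminated cards can be sketched in a few lines. The likelihood multipliers and the admission threshold below are invented for illustration; the study's actual probability tables are not reproduced in the text.

```python
# Sketch of the laminated-card decision aid: a multiplier for each
# finding that is present is multiplied into a combined score, and the
# patient is admitted to the cardiac ICU only if the score crosses a
# threshold. All numbers here are hypothetical, not from the study.

# Hypothetical likelihood multipliers per diagnostic finding.
LIKELIHOOD = {
    "chest_pain_location_typical": 3.0,
    "history_of_ischemia": 2.5,
    "ecg_st_change": 4.0,
}

ADMIT_THRESHOLD = 10.0  # illustrative cutoff


def combined_score(findings):
    """Multiply together the multipliers for every finding present."""
    score = 1.0
    for name, present in findings.items():
        if present:
            score *= LIKELIHOOD[name]
    return score


def admit_to_icu(findings):
    """Admit only if the combined score reaches the threshold."""
    return combined_score(findings) >= ADMIT_THRESHOLD


patient = {
    "chest_pain_location_typical": True,
    "history_of_ischemia": True,
    "ecg_st_change": False,
}
print(admit_to_icu(patient))  # 3.0 * 2.5 = 7.5 < 10.0, so False
```

The point of the aid is not the particular numbers but that it forces attention onto genuinely diagnostic cues rather than pseudo-diagnostic risk factors.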

After this decision aid was introduced, there was a marked improvement in the doctors' decision-making. They still caught a high proportion of problem cases, but they cut down dramatically on sending patients to the ICU who didn't need to go.

Of course, this might indicate the aid had worked. But, being good and conscientious scientists, Green and colleagues decided they had better check to ensure that this was the case. This was done by giving the doctors the decision tool in some weeks and not giving it to them in other weeks. Obviously, if the tool was the source of the improved performance, one would expect some deterioration in performance in the weeks when access to the aid was prohibited.

The results from this experiment showed something surprising. Decision-making seemed to have improved regardless of the use of the tool. What could account for this surprising finding? Was it possible that doctors had memorized the probabilities from the cards, and were using them even when the cards weren't available? This seemed unlikely, since the various combinations and permutations listed on the card were not easy to recall. In fact, the doctors had managed to assimilate the correct cues. That is to say, by showing them the correct items to use for diagnosis, the doctors' emphasis switched from pseudodiagnostic information to truly informative elements. They started looking at the right things!

Based on this experience, the researchers designed a very easy-to-use decision aid---a series of yes/no questions. If the patient displayed a particular electrocardiogram anomaly (the ST change), they were admitted to ICU straight away. If not, a second cue was considered: whether the patient was suffering chest pains. If they were, they were again admitted to ICU, and so forth. This aid made the critical elements of the decision transparent and salient to the doctor.
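The sequence of yes/no questions amounts to what decision researchers call a fast-and-frugal tree: each cue either settles the decision or hands it to the next cue. A minimal sketch, noting that the text names only the first two questions explicitly, so the final cue below is a hypothetical stand-in:

```python
# Sketch of the yes/no decision aid as a fast-and-frugal tree.
# Only the first two cues (ST change, chest pain) come from the text;
# "other_risk_factor" is a placeholder for the remaining questions.

def admit_to_icu(st_change, chest_pain, other_risk_factor=False):
    if st_change:              # ECG anomaly: admit straight away
        return True
    if chest_pain:             # second cue: chest pain -> admit
        return True
    return other_risk_factor   # later cues decide the remaining cases


print(admit_to_icu(st_change=False, chest_pain=True))  # True
```

Each branch either exits with a decision or asks one more question, which is exactly what makes the critical elements of the decision transparent and salient.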

These simple checklists also worked exceptionally well in practice. The simple yes/no questions raised the proportion of patients correctly sent to the ICU to 95 percent, while cutting the proportion incorrectly sent down to 50 percent. That was even better than the complex statistical model.

The power of simple checklists should not be underestimated. One very recent study6 examined how the implementation of a simple 19-point surgical checklist might help save lives. The checklist covered things as simple as making sure someone had checked that this was the same patient everyone thought it was, that the nurses had reconfirmed the sterility of all the instruments, and that at the end of surgery, someone counted all the instruments and made sure the number was the same as at the start of the operation.

These might sound like the kinds of things you hope would occur anyway. However, having the checklist forced people to go through the steps. The results of this checklist implementation were astounding. Prior to the introduction of the checklist the patient death rate was 1.8 percent and the complication post surgery rate was 11 percent. After the checklist was introduced the death rate dropped to 0.8 percent and the complication rate collapsed to 7 percent.

But, enough about doctors and their patients---what can investors learn from all of this? Simple: It is far better to focus on what really matters, rather than succumbing to the siren call of Wall Street's many noise peddlers. We would be far better off analyzing the five things we really need to know about an investment, rather than trying to know absolutely everything about everything concerned with the investment.

Jean-Marie Eveillard of First Eagle affirms my contention by saying, "It's very common to drown in the details or be attracted to complexity, but what's most important to me is to know what three, four, or five major characteristics of the business really matter. I see my job primarily as asking the right questions and focusing the analysis in order to make a decision."

Another great investor who knows how to distinguish the signal from the noise is, of course, Warren Buffett. You'll never hear him discuss the earnings outlook over the next quarter or consult an overabundance of information when making an investment. Instead he says "Our method is very simple. We just try to buy businesses with good-to-superb underlying economics run by honest and able people and buy them at sensible prices. That's all I'm trying to do."

There is no one right approach to investing, nor are there three, four, or five factors I can tell you to always evaluate when making an investment. It all depends on your investment approach. I'm a deep value investor, so my favorite focus points might not be to your tastes. I essentially focus upon three elements:


1. Valuation: Is this stock seriously undervalued?

2. Balance sheets: Is this stock going bust?

3. Capital discipline: What is the management doing with the cash I'm giving them?
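The three questions above can be turned into a simple screen. The field names and thresholds in this sketch are invented for illustration; they are not the author's actual criteria, only one hypothetical way a deep value investor might operationalize the three elements.

```python
# Hedged sketch of a three-question deep value screen.
# All thresholds and field names are hypothetical examples.

def passes_screen(stock):
    cheap = stock["price_to_earnings"] < 10        # 1. Seriously undervalued?
    solvent = stock["debt_to_equity"] < 1.0        # 2. Not going bust?
    disciplined = stock["cash_returned"] > 0       # 3. Capital discipline?
    return cheap and solvent and disciplined


candidate = {
    "price_to_earnings": 7.5,
    "debt_to_equity": 0.4,
    "cash_returned": 0.03,  # e.g., dividend plus buyback yield
}
print(passes_screen(candidate))  # True
```

As with the doctors' checklist, the value is not in the particular cutoffs but in deciding the handful of questions in advance and answering only those.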


These points may or may not be useful to you in your approach to investing, but the important takeaway here is that you should determine the factors you will use to assess your investment choices, and then focus your analysis on those elements.   [pg 73-85]



1. P. Slovic, "Behavioral Problems Adhering to a Decision Policy" (Unpublished paper, 1973).

2. C. Tsai, J. Klayman, and R. Hastie, "Effects of Amount of Information on Judgment Accuracy and Confidence" (Working paper, 2007).

3. A. Dijksterhuis, M. Bos, L. Nordgren, and R. Van Baaren, "On Making the Right Choice: The Deliberation-Without-Attention Effect," Science 311 (2006): 1005-1007.

4. F.D. Davis, G.L. Lohse, and J.E. Kottemann, "Harmful Effects of Seemingly Helpful Information on Forecasts of Stock Earnings," Journal of Economic Psychology 15 (1994): 253-267.

5. L. Green and J. Yates, "Influence of Pseudodiagnostic Information on the Evaluation of Ischemic Heart Disease," Annals of Emergency Medicine 25 (1995): 451-457.

6. A.B. Haynes et al., "A Surgical Safety Checklist to Reduce Morbidity and Mortality in a Global Population," New England Journal of Medicine 360 (2009): 491-499.