HOW TO BECOME SMARTER






Charles Spender




FOURTH EDITION (REVISED AND UPDATED)
July 2015

Table of Contents

How to read this book

Notes about the 4th Kindle edition

Cautionary statement

Twelve things this book can help you achieve

CHAPTER 1: Mental clarity

CHAPTER 2: Sleep management

CHAPTER 3: Attention control, or the ability to concentrate

CHAPTER 4: Emotional intelligence

CHAPTER 5: Reading and writing performance

CHAPTER 6: Social intelligence

CHAPTER 7: Six things you shouldn’t do with your money

Appendices

Endnotes

Cited literature

List of tables

About the author

How to read this book

A shorter, less technical version of this book, “Become Smarter,” is available. Please do not read the present ebook straight through from start to finish. It’s a good idea to first read all key points and the summaries of all chapters. To skip through book sections, you can use the left and right positions of the 5-way button of your Kindle. After that, you can read the chapters that interest you; you don’t have to read the whole book from cover to cover. The text contains many links (to endnotes, literary references, and cross-references), and you can ignore them if you find them distracting.

Notes about the 4th Kindle edition

This Kindle edition is a revised version of the paperback edition of the book. The second edition was the result of extensive editing and was much easier to read than the first Kindle edition, which was available prior to January 15, 2012. The fourth edition contains updated scientific evidence and some revisions. All cross-references and literary references appear as clickable links, so looking up the source behind a statement takes only a couple of clicks.

Cautionary statement

Many claims in this book are based on the author’s personal experience as a healthy subject. About half of the proposed methods are supported by scientific studies. The advice offered is not meant to replace the recommendations and advice of a physician. Nursing mothers, pregnant women, and people who are taking medication or have a chronic medical condition should consult their physician or a qualified health care professional before trying any of the lifestyle changes described in this book.

Twelve things this book can help you achieve

   

  1. Increase your score on general aptitude or intelligence tests.
  2. Understand and learn complex reading material that is uninteresting to you (but necessary for your job or school).
  3. Concentrate on job- or school-related reading and writing tasks for hours at a time.
  4. Reduce procrastination and overcome writer’s block.
  5. Experience euphoria without drugs and come up with new ideas, when necessary.
  6. Cope with extended periods of solitude, such as those related to academic studies or big writing projects.
  7. Prevent yourself from making rash, impulsive decisions.
  8. Prevent fits of anger and reduce feelings of hostility.
  9. Sharpen your wit, become more talkative, and entertain people.
  10. Depending on circumstances, use different regimens that improve one or another mental function.
  11. Get along with people and reduce the number of arguments and conflicts.
  12. Self-manage a severe mental illness such as schizophrenia without drugs.

BONUS: A sustainable weight loss regimen (Appendix VIII.b).

CHAPTER 1: Mental clarity

Contents:

Biological components and knowledge components of intelligence

Interpreting evidence from studies on human subjects

Can mental and physical exercise make you smarter?

The natural nutrition theory of intelligence and its limitations

Artificial ingredients in the diet and their effects on mental performance

Chemicals formed during the cooking of food and their effects on the brain

Several safe diets that can improve mental abilities

Saturated fat: friend or foe?

A diet that can worsen mental abilities quickly

The “no-diet approach,” or food restriction without adhering to any strict diet

Potential adverse effects

Summary of Chapter One

Biological components and knowledge components of intelligence

Before we begin, just a reminder: you don’t need to read this book straight through from start to finish. It is best to first read all key points and the summaries of all chapters. To skip through book sections, you can use the left and right positions of the 5-way button of your Kindle. After that, you can read the chapters that catch your interest; you don’t have to read the whole book.

It is prudent at the outset to define a few basic concepts, some of which might be familiar. Different sources, however, may offer different definitions, and defining these concepts now will avoid ambiguity later. Many definitions used throughout this book are from a recent review article by John Mayer and colleagues [23]. A mental ability is “a person’s capacity to perform a psychological task, such as solving a problem, so as to meet a specified criterion such as correctness, novelty, or speed” [23]. This book discusses such mental abilities as attention control, impulse control, and information processing speed, among others. Usually researchers measure many different mental abilities collectively in order to determine a person’s intelligence. Therefore, we can define intelligence as “a set of mental abilities that permit the recognition, learning, memory for, and capacity to reason about a particular form of information, such as verbal information.” John Mayer and colleagues define intelligence as “a mental ability or a set of mental abilities…” In this book, however, the word “intelligence” always means a set of mental abilities.

For most lay readers the word “intelligence” is associated with the intelligence quotient (IQ), widely used as a measure of mental abilities for clinical, and sometimes, occupational purposes. The type of intelligence measured by IQ is called “academic intelligence” in psychological literature in order to distinguish it from other types of intelligence, such as emotional intelligence and social intelligence. Measurements of academic intelligence include assessment of the ability to process and manipulate verbal information (words) in one’s mind and the ability to process numerical information and carry out calculations. Academic intelligence also includes the ability to comprehend information about spatial organization and geometrical structure of objects. Scientists use the scores obtained by measuring the relevant mental abilities on intelligence tests to calculate a single value, g, or general intelligence. This measure of academic intelligence is not constant for any given person. It can change throughout a person’s lifespan. On average, general intelligence increases with age until the late 30s and then declines slowly. Due to the well-established age-related changes in general intelligence (g), calculation of the final IQ score includes adjusting g for the person’s age.

Most IQ tests are scaled so that the average score in the general population is 100. IQ values above 100 indicate higher than average intelligence; for example, only about 0.1% of the population has an IQ over 149. Conversely, IQ scores below 100 indicate lower than average intelligence, and a score lower than 70 suggests mental retardation. To sum up, IQ is the age-adjusted general intelligence factor (g) calculated by measuring various mental abilities related to the processing of verbal, numerical, and geometrical information. All of this is within the realm of academic intelligence, often called simply “intelligence.”
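
To see where such population percentages come from, here is a minimal sketch in Python (not from the book). It assumes that IQ scores follow a normal distribution with a mean of 100 and a standard deviation of 15 or 16 (different tests use different scalings), so the exact tail figures depend on that assumption.

  from scipy.stats import norm

  mean = 100
  for sd in (15, 16):  # common IQ scalings; the exact value depends on the test
      above_149 = norm.sf(149, loc=mean, scale=sd)   # right tail: share with IQ over 149
      below_70 = norm.cdf(70, loc=mean, scale=sd)    # left tail: share with IQ below 70
      print(f"SD = {sd}: above 149: {above_149:.2%}, below 70: {below_70:.2%}")
  # With SD = 15 roughly 0.05% score above 149 and about 2.3% score below 70;
  # with SD = 16 the figures are about 0.11% and 3%.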

Research about intelligence is not without controversy. For example, there are opposing views on the validity of using a single factor (general intelligence, g) to measure a collection of different mental abilities. Research on differences in intelligence among different population groups is another controversial area and is outside the scope of this book. Measurements of IQ can vary for the same person because of factors such as fatigue, general state of health, and repeat testing. Taking IQ tests repeatedly can teach certain test-taking skills, resulting in an IQ score several points higher than the first-time score [499]. These examples point to potential difficulties in interpreting IQ scores.

As to the practical usefulness of IQ testing, studies have identified correlations of IQ scores with academic performance and with job performance in a variety of occupations. A high score can predict better job performance [955, 956]. There is also some correlation of IQ with social status and income level, but this correlation is in the weak-to-moderate range. Equally or perhaps more important in attaining social status and high income are factors such as personality traits, social status of parents, and luck. Moderately high IQ scores correlate (moderately) with higher social status and income, yet statistical studies show that very high IQ scores provide little further benefit for social status and income (there is no correlation) [24, 25]. Self-employment does not correlate with intelligence either [516].

Research in the last several decades has identified types of intelligence other than academic. These deal with mental abilities different from those measured by academic intelligence (IQ) tests. Emotional intelligence deals with mental abilities related to identification and processing of emotions in the self and other people. Social intelligence measures abilities related to intelligent behavior in social situations. Studies show that academic intelligence exhibits little if any correlation with emotional intelligence. These two concepts deal with independent and unrelated sets of mental abilities. We will discuss emotional and social intelligence in Chapters Four and Six.

Two other dimensions of intelligence are important here. “Crystallized intelligence” deals with acquired knowledge and skills. “Fluid intelligence” relates to how well the human brain works, regardless of knowledge and skills. “Crystallized intelligence” measures such abilities as vocabulary, general knowledge, and the like, while “fluid intelligence” assesses the ability to understand and solve novel problems. The formal definition of fluid intelligence is “on-the-spot reasoning and novel problem-solving ability” [26]. For example, suppose two people have roughly the same amount of knowledge, but one of them can better understand complex problems. The latter person will attain a higher score on an intelligence test. In simple terms, fluid intelligence assesses how well the brain works; that is, it assesses the biological properties of the brain. It does not measure information processing speed and short-term memory, although all three measurements indicate how well the brain works and studies show that all three correlate [27]. Some parts of IQ tests mostly assess crystallized intelligence, such as questions intended to measure vocabulary or general knowledge. Other parts of IQ tests mostly assess fluid intelligence, such as questions that require complex calculations. Still other test items may measure both fluid and crystallized intelligence to the same extent. It is possible to assess fluid and crystallized dimensions of intelligence separately, in addition to the final IQ score.

All three types of intelligence—academic, emotional, and social—have both crystallized and fluid dimensions. For example, recent studies of social intelligence identified two components: crystallized social intelligence and fluid social intelligence [28]. The former deals with acquired social skills and knowledge, whereas the latter deals with the ability to understand and predict social situations and to solve problems related to social situations. Thus there may be six major subtypes of intelligence: crystallized academic, fluid academic, crystallized emotional, fluid emotional, crystallized social, and fluid social intelligence. Some studies suggest that different areas of the brain are responsible for different types of intelligence. One brain region may be primarily responsible for academic intelligence and another area for carrying out mental processes related to emotional intelligence. It is unclear if the same is true for the crystallized and fluid dimensions of each type of intelligence. Further research is needed.

For the purpose of explaining the meaning of techniques proposed in this book, it will be convenient to subdivide the six subtypes of intelligence into “knowledge components” and “biological components.” The knowledge components are crystallized academic, crystallized emotional, and crystallized social intelligence. The biological components are fluid social, fluid academic, and fluid emotional intelligence. The biological components are the main focus of this book because they deal with how well the brain functions and they are largely independent of knowledge and acquired skills. Put another way, this book focuses on improving the functioning of the human brain, and one can assess this improvement by measuring fluid intelligence. Crystallized intelligence (knowledge and skills) will not change, at least in the short term, with approaches aimed at improving brain function.

This book defines the sum of all three biological components (fluid social, fluid academic, and fluid emotional intelligence) as “mental clarity.” This is a measure of how well the brain functions with respect to all kinds of mental tasks: those related to emotions and social situations as well as academic tasks and problems. Scientific validity of the concept of mental clarity is unknown. It is unclear whether this is a single factor that correlates with its three component parts, or whether mental clarity is just a sum of three independent, unrelated factors. Nevertheless, the concept of mental clarity will be useful in this book. This concept attempts to separate the biological components of intelligence from the knowledge components. Measurements of mental clarity will assess how well the brain is functioning in general.

Ideally, to measure mental clarity we should use well-established tests of academic, emotional, and social intelligence [29–31]. We would then calculate the fluid dimension from each result and combine the three values using some mathematical formula. This approach would be accurate and scientifically valid. Yet the existing tests are expensive, require qualified personnel to administer them, and may not be available in all languages. At present, validated tests of emotional intelligence cannot assess its fluid component.

I designed a brief self-rating questionnaire (Appendix IV) that attempts to assess the fluid components of academic, emotional, and social intelligence. One of the drawbacks of this instrument is that self-rating questionnaires do not represent an accurate assessment of mental abilities [23]. This is because self-rating often reflects a person’s self-perception and does not measure mental abilities the way proctored tests do. For this reason, the mental clarity questionnaire is not an accurate measure of intelligence. Nevertheless, it does avoid direct questions about mental abilities and asks only questions that can assess such abilities indirectly, and thus more objectively. For example, many respondents, even those who don’t have above-average IQ scores, will answer the question “Do you think that you are very smart?” affirmatively. On the other hand, the question “Is your life easy?” will receive a more objective answer and will paint a more accurate picture of the respondent’s intelligence. Highly intelligent people usually have no difficulty solving life’s problems. The “ease of life” is not a perfect measure of intelligence and there are many exceptions; that is why the mental clarity questionnaire contains twenty questions. Another potential problem with self-rating questionnaires is the honesty of responses, especially when the outcome of the testing has real-life consequences, such as a promotion or being hired for a job. For us this problem is a minor one because the purpose of testing is to see whether the self-help approaches described in this book are effective or not. The book’s readers have little or no incentive to be dishonest with themselves. This type of assessment of mental abilities is not perfect, but its advantages are ease of use and low cost. Nevertheless, the most accurate way to assess the usefulness of my advice is to take a proctored IQ test (not an internet IQ test) before and after one of the proposed lifestyle changes. You can then calculate the fluid component of academic intelligence and discover any improvement. If you are a student, you can assess improvement in your mental abilities, if any, by the change in your grade point average after you try some of the proposed techniques.

There is a possibility that your mental clarity score will be low, according to the proposed questionnaire (Appendix IV). This does not mean that you must drop everything and do your best to try to improve your score. Your low score can mean that your mental abilities are fine and the questionnaire is imperfect. So far, nobody has validated this questionnaire scientifically and you are under no obligation to do anything to improve your score.

Key points:

Interpreting evidence from studies on human subjects

Any discussion of experimental evidence that supports this book’s theories would be difficult without defining some basic terminology and explaining how to determine the strength of evidence produced by scientific experiments. This section explains the meaning of randomization, blinding, statistical significance, clinical change, and some other relevant concepts that researchers use in scientific experiments, such as clinical trials. (Readers can skip the detailed discussion of this topic and jump to the key points: press the skip button or this link.) Most often, a clinical trial consists of two groups of patients: an experimental group (who receive active treatment) and a so-called control group. The control group receives a placebo, no treatment, or a specially designed “control treatment.” Researchers determine the effectiveness of a drug or another type of treatment by comparing symptoms in the experimental group to those in the control group(s). Studies unrelated to medical treatments that scientists perform on healthy human subjects are sometimes called clinical trials, but the more correct term is “a volunteer study.” Both clinical trials and volunteer studies require the informed consent of test subjects. The study protocol must receive approval of an ethics committee at the research institution where the study is taking place.

The word “randomized” in the phrase “randomized controlled trial” means that researchers have assigned the test subjects to either the experimental or control group randomly, using a special “randomization procedure.” This is in contrast to a situation where an investigator assigns test subjects to the groups however he or she wants. For example, he may distribute them in a way that will produce a desirable outcome. Random and unbiased distribution of test subjects among the control and experimental groups ensures that there is no influence from irrelevant variables (such as age, sex, or personality traits) on the results. In other words, randomization ensures that the study’s outcome depends only on the variables being tested in the experiment. For instance, suppose a study investigates the effect of food additives on attention function. In a nonrandomized controlled trial, an investigator may inadvertently assign the most inattentive test subjects to the experimental group, who will receive food additives. This causes the test subjects with the best attention function to end up in the control group, who will receive a diet free of food additives. Without a randomization procedure, preexisting differences in attention function between the two groups will influence the results of this study. If this nonrandomized study finds that food additives in the diet worsen attention function, the validity of this finding will be questionable. If, however, we assign the test subjects to the groups randomly, then roughly equal numbers of inattentive and attentive subjects will be present in each group (experimental and control). In this case the results of the study will depend more on the presence or absence of food additives in the diet (if such a relationship exists) and less on irrelevant variables. Randomized controlled trials provide stronger evidence of the effectiveness of a drug (or other medical treatment) than clinical trials that are not randomized.
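
As an illustration, here is a minimal sketch in Python (not from the book) of the simplest kind of randomization procedure: shuffle the list of participants and split it in half. Real trials typically use more elaborate schemes (for example, block or stratified randomization), and the subject IDs below are hypothetical.

  import random

  def randomize(participants, seed=None):
      # Simple 1:1 random allocation: shuffle a copy of the list and split it in half.
      rng = random.Random(seed)
      shuffled = list(participants)
      rng.shuffle(shuffled)
      half = len(shuffled) // 2
      return shuffled[:half], shuffled[half:]  # (experimental group, control group)

  # Hypothetical example: 20 subject IDs assigned to two groups of 10
  experimental, control = randomize(range(1, 21), seed=42)
  print(sorted(experimental))
  print(sorted(control))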

Another way of ensuring the integrity of clinical studies is to blind the test subjects to the type of treatment they receive: placebo or active treatment. This is called a blinded clinical trial. A placebo is a treatment that produces no biological effect on the medical condition in question. For example, a sugar pill can serve as a placebo in a clinical trial of an antianxiety drug. Blinding means that the test subjects will not know whether they are in the experimental or in the control group. Often, clinical studies employ an additional level of blinding: blinding the investigators who conduct measurements to the type of treatment test subjects receive. This is a double-blind trial. Blinding test subjects allows investigators to assess the placebo effect of a treatment. For example, if 30% of patients in the placebo group show clinical improvement, this means that self-suggestion without any real treatment can improve symptoms in 30% of patients with the disorder in question. Alternatively, this means that 30% of patients with the disorder will improve spontaneously without any intervention, either placebo or treatment, as a result of the natural progression of the disease [32, 33, 904]. If only 30% of patients who receive the active drug show clinical improvement, then the drug in question is no better than a placebo. The drug has no biological effect on patients. If this study were not blinded, then an investigator could incorrectly interpret the 30% response to the drug as a beneficial effect of the experimental drug. Blinding of investigators ensures that investigators’ biases do not influence the results. Suppose an investigator is interested in demonstrating a beneficial effect of an experimental drug (to renew his research grants or get a job promotion). He may inadvertently record results or conduct measurements in such a way as to produce the desired outcome. This is the “observer bias.” On the other hand, if the investigator does not know whether the patients receive a placebo or active treatment, the observer bias will be minimal. At the end of the double-blind trial the investigators will decipher the status of the patients (placebo or active treatment) and will be able to see unbiased results. The results of double-blind randomized controlled trials primarily reflect the effects of the treatment in question. These results are virtually unaffected by biases and irrelevant variables. So this sort of study is much more trustworthy than clinical trials that are not randomized and not double-blinded. Most scientists consider double-blind randomized controlled trials the “gold standard” of clinical trials.

Note that a placebo control can be problematic if the disease in question causes significant suffering to patients. If an effective treatment exists, it is unethical to leave this medical condition untreated. It is also impossible to design a placebo control group for diet studies. A diet change always has some biological effects and therefore cannot serve as a placebo control. A control diet must be either a specially designed diet or no diet, in which case the participants follow their customary nutritional regimen. The former is not a placebo diet because it is different from the customary diet of a test subject and will produce some biological effects from the change of nutrition. The latter (no diet) is not a placebo diet because the test subjects are complying with no particular diet (receive no treatment) and know about it. A placebo is a treatment that produces no biological effect on the medical condition under study. Thus, the participants who receive no treatment are not a placebo control group: they are a “no-treatment control” group. Despite the absence of a placebo diet, it is still possible to organize a blinded trial of a diet. Scientists can design a control diet and not tell test subjects which of the two diets will produce beneficial effects. These two diets (experimental and control) should be similarly difficult to adhere to. Results of a blinded trial will be more trustworthy than those of an open trial, where participants know whether they are in the experimental or control group. A double-blind trial of a diet is also possible if the investigators who assess clinical change are blinded to the type of diet the patients consume. Researchers can avoid most of the complexities associated with control diets. They can use an open trial and compare the experimental diet to a drug known to be better than a placebo.

Two definitions would be useful here. A response rate of a medical treatment is the percentage of patients who show improvement after receiving this treatment. For instance, if a drug reduces symptoms of a disease in 70% of patients, this means that this drug has a response rate of 70%. A placebo rate of a disease is the percentage of patients in the placebo group who show improvement during a clinical trial.

Even if a placebo control is unfeasible, it is often possible to tell whether the observed benefits of a treatment are due to the placebo effect. A scientist can do this by comparing the response rate to the placebo rate from previous studies of the same disease. In clinical trials involving patients with anxiety disorders, placebo rates can be as high as 50%. Therefore, antianxiety treatments that produce improvement in 50% of patients are likely to be no better than a placebo. Some studies have questioned the validity of the placebo effect itself. They show that much of the effect is the natural course of the disease, or spontaneous improvement without any intervention [32, 33, 904, 931]. Comparison of placebo control to no-treatment control in anxiety shows that the difference is insignificant. In patients with anxiety (or depression) an authentic placebo effect is either nonexistent or minuscule, being a result of biases and imperfections of measurement [931]. These data suggest that what people commonly mistake for a placebo effect in patients with anxiety or depression is the irregular course of these diseases. The latter involves numerous and spontaneous improvements and relapses.

Statistical significance is the third important factor to consider when evaluating strength of evidence. In lay language, the meaning of the term statistically significant is “a result that could not occur by chance” or “not a fluke.” Conversely, statistically insignificant means “likely to be a fluke” or “the result may have occurred by chance.” If a clinical trial shows that the effect of some treatment is statistically insignificant, the evidence of its effectiveness is weak. Statistical significance is not a complicated concept and an example will illustrate how it works. Suppose we have two groups of test subjects and each consists of 10 depressed patients. One group will serve as the control and the other as the experimental group in a clinical trial of a novel antidepressant drug. Psychiatrists measure severity of depression using special questionnaires that allow for expressing symptom severity as a numeric value. Let’s assume that the rating of symptoms for most patients in the two groups is a number between 10 (mild depression) and 30 (severe depression). We will also assume that this number is different for each patient in a group. In order to describe how widely the symptoms vary among the patients, we will use a measure called standard deviation. The greater the standard deviation, the wider the range of symptoms in a group. On the other hand, if standard deviation were zero, there would be no variability and all patients would have the same rating of symptoms. Without going into detail about the calculation of standard deviation, let us assume that the variability of symptoms (standard deviation) equals five in each group of patients. Another useful measure that we will need is the average rating of symptoms of depression in each group of patients. This average (also known as “the mean”) equals the sum of the ratings of every patient divided by the number of patients in a group. Let’s say the average rating of symptoms is 20 in each group, which means “moderate depression.” A good rule of thumb regarding standard deviation and the average value is the following: ninety-five percent of items in a group are usually within two standard deviations (5 x 2) of the average value (20). In other words, if the average value (the mean) is 20 and the standard deviation is 5, then 95% of people in this group will have symptom ratings between 10 and 30, and the remaining 5% will be outside this range.

After the clinical trial finishes, we find that treatment with the novel antidepressant drug lowered the average rating of symptoms from 20 to 18 in the experimental group (10 patients). The average rating stayed the same (20) in the control group (another 10 patients), who received a placebo pill. For simplicity’s sake, we will assume that the variability of symptoms among patients stayed the same in both groups after treatment (standard deviation is still 5). We now have everything we need for assessing the statistical significance of the observed beneficial effect of the antidepressant drug. We know the number of patients in each group (10) and the average rating of symptoms after treatment in the control (20) and the experimental group (18). We also know the standard deviation of symptoms in each group after treatment (both are 5). You don’t really need to know the complicated formula used for calculating statistical significance. You can get a rough idea of whether the effect of treatment is statistically significant by eyeballing the above numbers. (Curious readers can find free calculators on the Internet that will determine statistical significance of a study based on the above variables; look for a t-test for two samples.) The beneficial effect of the drug is not statistically significant in our example (the calculator will produce a p value greater than 0.05). This means that the change of symptoms may have occurred by chance. In other words, the evidence of effectiveness of this novel antidepressant drug is weak.

There are two simple reasons for this lack of statistical significance. One is the small size of the effect of treatment compared to the variability of depressive symptoms among the patients. The average rating of symptoms differs by a measly 2 between the treatment group and the placebo group (18 versus 20), whereas variability of symptoms (standard deviation) is a whopping 5 in both groups. The change produced by the drug is less than one half of the standard deviation, which makes this result unimpressive. The other reason for the lack of statistical significance is the small number of participants in the study. We have fewer than 20 subjects in each group, which is a small number. If, however, exactly the same results transpired with 100 patients in each group, the result would be statistically significant (the p value would be less than 0.05). In a study that includes a large number of test subjects, random factors are less likely to influence the results. All else being equal, the results will be more statistically significant. Results of a clinical trial are likely to be statistically insignificant when two conditions are true:

  1. The effect of treatment is small compared with the variability of symptoms (less than about one half of the standard deviation).
  2. The number of test subjects in the study is small (for example, fewer than 20 per group).

The effect of treatment will be statistically significant if the study includes a large number of test subjects and the effect of treatment is greater than a half of the standard deviation of symptoms. If a result is not statistically significant, this does not necessarily mean that it’s a fluke. It can be a valid result, but there is uncertainty as to whether it’s a fluke or not. A more rigorous (statistically significant) study is necessary to either confirm or refute the validity of the result.
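
For readers who want to check these numbers, here is a minimal sketch in Python (not from the book) that runs a two-sample t-test on the summary statistics from this example. The first call corresponds to the 10-patient groups, the second to the 100-patient groups.

  from scipy.stats import ttest_ind_from_stats

  # 10 patients per group: mean rating 18 (drug) versus 20 (placebo), standard deviation 5 in both
  small = ttest_ind_from_stats(mean1=18, std1=5, nobs1=10,
                               mean2=20, std2=5, nobs2=10)
  print(small.pvalue)  # about 0.38, greater than 0.05: not statistically significant

  # The same means and standard deviations, but 100 patients per group
  large = ttest_ind_from_stats(mean1=18, std1=5, nobs1=100,
                               mean2=20, std2=5, nobs2=100)
  print(large.pvalue)  # about 0.005, less than 0.05: statistically significant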

The fourth factor to consider when interpreting evidence from a study on human subjects is the size of the effect of a treatment. While statistical significance helps us to answer the question “How likely is it that the change observed in patients after treatment occurred by chance?” it does not address the question “How much did the treatment improve the symptoms in the patients?” A result can be statistically significant, but the actual change of symptoms can be tiny, to the point of being imperceptible to the patients. For a treatment to be effective it should produce a change in symptoms that will be noticeable to the patients. A measure known as “clinical change” is useful in this regard. In the example about the novel antidepressant, the drug lowered the average rating of symptoms from 20 (moderate depression) to 18 (also around moderate depression). Let’s say that we repeated the clinical trial of this drug with several hundred patients each in the control and in the experimental group. The results of the trial turned out the same. Now the drop of 2 points in the average symptoms in the experimental group (from 20 to 18) is statistically significant, but it is not clinically significant. Both ratings, 18 and 20, are in the range of moderate depression and the change will go unnoticed by most patients. If, on the other hand, the treatment had resulted in the average rating of 10 (mild depression), then this effect would have been clinically significant. This means that the treatment would have produced a change noticeable to patients. A useful measure of clinical significance is clinical change. This measure shows how far a given treatment moved the patient along the path from disease to health. Scientists express this measure in percentage points. One hundred percent clinical change means full remission (disappearance of all signs and symptoms of a given disorder) and 0% clinical change means no change in symptoms.

Let’s say we decided to test an older, widely prescribed antidepressant. When tested under the same conditions as above, treatment with the older drug resulted in an average rating of depressive symptoms that equals 10 (mild depression) in the experimental group. In the placebo group, the rating of symptoms equals 20 (moderate depression, unchanged), and the results of the trial are statistically significant. Let’s also assume that the rating of 5 corresponds to mental health, or the absence of depressive symptoms. There are 15 points between the rating of 20 (moderate depression) and 5 (health). The average rating of 10 (mild depression) in the experimental group means that the older antidepressant moved the patients 10 points closer to health (20 minus 10). We will assume that the maximum possible clinical change is 100%, which is equivalent to a change of 15 points from 20 (moderate depression) to 5 (health). We can calculate the clinical change after treatment with the older antidepressant as 10 divided by 15 and multiplied by 100%, which equals 67%. This is a big clinical change. In comparison, the clinical change produced by the novel antidepressant that we talked about earlier is 2 divided by 15 and multiplied by 100%, which equals 13%. This is a tiny clinical change, which will be almost unnoticeable to the patients. Thus, the evidence of effectiveness is strong for the older antidepressant and weak for the newer one.
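
Here is a minimal sketch in Python (not from the book) of the clinical change calculation just described; the function name is mine, and the numbers are the hypothetical ratings used in the text (20 at baseline, 5 for health).

  def clinical_change(before, after, healthy):
      # Percentage of the distance from the baseline rating to the "healthy" rating
      # covered by the treatment: 100% means full remission, 0% means no change.
      return (before - after) / (before - healthy) * 100

  print(clinical_change(before=20, after=10, healthy=5))  # older antidepressant: about 67%
  print(clinical_change(before=20, after=18, healthy=5))  # newer antidepressant: about 13%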

What about healthy human subjects? How do scientists measure the size of the effect of treatment in studies on healthy volunteers? Usually they use standard deviation. Remember that standard deviation describes how widely a measure (such as IQ) varies within a group of test subjects. Let’s say that the average IQ in a group of 200 volunteers is 100 and the standard deviation is 15 points. According to the rule of thumb mentioned earlier, this usually means that 95% of people in this group have an IQ within 2 standard deviations (15 x 2) of the mean, or average value (100). Put another way, if the average IQ in a group is 100 and standard deviation is 15, then 95% of people in the group have an IQ between 70 and 130. The remaining 5% of the group are outside this range. If some hypothetical treatment can increase the average IQ by 15 points in this group of 200 volunteers, this is a large effect. A change that equals the value of one standard deviation (even 80% of a standard deviation) or greater is considered large. A change of one half of a standard deviation is a moderate effect size. One-fourth of a standard deviation or less corresponds to a small effect size. In summary, we can express the size of the effect of treatment using either standard deviation or clinical change. A small effect size means that evidence of the effectiveness of the treatment in question is weak. The evidence is weak even if the results of the study are statistically significant.
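
Here is a minimal sketch in Python (not from the book) that expresses a treatment effect in standard deviation units and applies the rough size labels used in this paragraph; the function names and the IQ figures are mine and purely illustrative.

  def effect_size_in_sd(mean_treated, mean_control, sd):
      # Difference between group means expressed in units of the standard deviation
      return abs(mean_treated - mean_control) / sd

  def size_label(d):
      # Thresholds follow the text: about one SD (or even 0.8 SD) is large,
      # about half an SD is moderate, one-fourth of an SD or less is small.
      if d >= 0.8:
          return "large"
      if d >= 0.5:
          return "moderate"
      if d <= 0.25:
          return "small"
      return "small to moderate"

  d = effect_size_in_sd(mean_treated=115, mean_control=100, sd=15)  # a 15-point IQ gain
  print(d, size_label(d))  # 1.0 large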

The fifth factor useful in assessing strength of evidence is publication of the results of the experiment. If researchers publish the results in a peer-reviewed scientific journal, these results are likely to be trustworthy (you can find a good database of scientific biomedical literature at www.PubMed.gov). Each article submitted to a scientific journal undergoes a thorough review by one or more independent experts in the field. Based on this expert opinion, the journal reaches a decision about either publication or rejection of the manuscript. This process is called peer review. The reviewers find fault with every word in the scientific article. Publication of fraudulent results, if an investigation proves this to be the case, has serious consequences for the authors. The journal will report the fraudulent activities to the authors’ employer and research funding agencies. This may result in the authors’ dismissal, demotion, or loss of research funding. The journal retracts the article containing fraud, and databases of scientific literature (indexing services such as PubMed) flag the article’s record as retracted.

Research results presented in media or publications other than scientific journals are less trustworthy. When researchers present unpublished results at a scientific conference, you have to be skeptical. If a scientist reports biomedical findings in a book or on a personal or commercial website, the data are less trustworthy still. When people publish research data in a book or on a website, nobody performs critical review of the data. Sometimes a publishing house does some review but it is always superficial, and there are no penalties for fraud. There are rare exceptions, such as academic textbooks: numerous experts review them thoroughly and these books do not contain any original unpublished research. Most academic textbooks are a trustworthy source of information. The problem with textbooks is that they become outdated quickly, especially in the biological and health sciences. This is due to the lengthy writing, review, and publication process associated with academic textbooks. Other trustworthy sources of information are health-related government websites that are peer-reviewed (they say so at the bottom of the page).

It would be fair to say that Wikipedia is a peer-reviewed source of information, although many of the reviewers are not experts. These people can feel strongly about an issue and are willing to devote the time to making their opinion count in an article. In my view, Wikipedia is an excellent source of basic information on topics that are not controversial. The main advantages of Wikipedia over review articles published in scientific journals are the following: {a} Wikipedia articles are available free of charge (there are some freely available scientific articles, too); {b} they usually cover all possible aspects of a topic; and {c} they are written in lay language. The main disadvantages of Wikipedia compared to review articles published in scientific journals are these: {1} Wikipedia articles occasionally contain inaccuracies and {2} quality of sources is sometimes poor (an article may cite some key facts from a book, website, or another non–peer-reviewed source). In my experience, most reviewers of scientific articles do not allow authors to cite Wikipedia as a source of information. Authors can cite only studies published in peer-reviewed journals for a scientific article’s essential information (definitions, argumentation in favor of a theory, and rationale for a study). This raises another important issue. If an author reports research data in a book and a whole study is based on information from other books and similar non–peer-reviewed sources, you can disregard this information as untrustworthy. Citing peer-reviewed sources is one of the main criteria for considering scientific information reliable.

The sixth factor is the track record of the author or authors of the experiment. You need to be skeptical of authors with a history of fraud (accusations or conviction). Authors who have no scientific publications and do not have academic degrees are unlikely to produce valid research data. Authors who are on the blacklist of the website QuackWatch.org are usually in the category of unscrupulous peddlers of dangerous or untrustworthy information or of useless health-related products. QuackWatch is a website that contains information on quackery and health fraud and helps consumers make intelligent decisions regarding various unconventional and conventional medical treatments. The website contains a lot of useful information. Occasionally I disagree with articles posted on QuackWatch. The website used to list the late Robert Atkins as a quack, despite several studies having shown that his diet causes the fastest weight loss among many diets. Nonetheless, overall the information presented on QuackWatch is sound and trustworthy.

Finally, credibility of a scientific study’s results is higher when the authors do not have financial conflicts of interest (also known as competing financial interests) associated with publication of the study. Let’s say a research article demonstrates beneficial effects of a drug, and the authors are employees of the pharmaceutical company that makes the drug. This article will be less credible than a similar one published by academic researchers with no financial ties to the drug company. Research shows that studies funded by the pharmaceutical industry are more likely to report beneficial effects of a drug produced by the sponsor and to underreport adverse effects [34]. For example, a research article is comparing the effectiveness of drugs A and B. The article is likely to show that drug A is better than drug B if the funding for the study comes from drug A’s manufacturer. Another group of investigators, whose research received funding from the manufacturer of drug B, publishes a similar study. But this paper will report that drug B is superior to drug A. Both articles can use rigorous statistics and methodology, pass peer review, and be accepted for publication in respectable journals. There are real-life examples of this kind of biased research [35], known as the “funding bias.” Even double-blind randomized controlled trials are not always immune from this bias. In 2007, a research division of pharmaceutical company Eli Lilly & Co. published a promising study of a novel antipsychotic drug in a prestigious scientific journal [935]. This double-blind randomized controlled trial included 196 schizophrenic patients and the results were both clinically and statistically significant. Many in the scientific community viewed this study as revolutionary because it reported a new class of antipsychotic drugs, based on a mechanism different from all previous classes of these drugs. Sometime later, in April 2009, Eli Lilly announced that the second trial of this drug in schizophrenia failed to show better than placebo benefits. Thus, either the results of the first clinical trial were a rare coincidence or the funding bias had crept into that first trial. We can conclude that several independent groups of investigators should repeat an experiment and reproduce the findings in order to prove the validity of any type of scientific results. Note that the presence of competing financial interests does not necessarily mean that the reported results are always biased; only that there is a chance that the funding bias is present.

An author’s royalties from a book, for example, do not by themselves constitute a competing financial interest. Authors of scientific articles do not receive royalties from their sale by the publisher, but these authors nonetheless have clear economic rewards from publishing. For someone working in academia, the amount of published work determines job promotions, access to research funding, and salary. Health care authorities do not consider these economic motives conflicts of interest. On the other hand, if a research paper or book presents in a favorable light some product that the author sells, this is a competing financial interest. So is promotion of a product made by a company in which the author is a shareholder.

To summarize, an experiment will produce strong evidence of a beneficial effect of a treatment (a drug, diet, or medical procedure) if:

  1. The study is a randomized controlled trial, preferably double-blind.
  2. The results are statistically significant.
  3. The size of the effect (clinical change) is large enough to be noticeable.
  4. The results are published in a peer-reviewed scientific journal.
  5. The authors have a good track record and no financial conflicts of interest.
  6. Independent groups of investigators have reproduced the findings.

According to these criteria, most of the evidence from my self-experimentation presented in this book is weak. Let’s say a single participant reports using the mental clarity questionnaire twice: before embarking on the modified high-protein diet and after 3 weeks of the diet. This study has a control group: the subject serves as his own control when tested without treatment, or before the diet. The study, however, is not randomized and not blinded, which is a minus. Even though a placebo control is not always possible in nonpharmacological studies, the author could still conduct a blinded study with a well-designed control diet. The author didn’t do this. He is reporting the results in a book, not in a scientific journal, which is a minus. The book is not directing readers to buy any goods or services from the author and all proposed techniques are nonproprietary (not protected by patents): a plus. The author has some background in biomedical sciences and QuackWatch.org has not blacklisted him (at least not yet): a plus. The number of test subjects is small (one), and therefore the results are statistically insignificant. But the author claims that if he repeats the experiment 10 times or more, it produces the same result. This is somewhat (but not exactly) similar to testing the diet once on 10 different test subjects. These results are more convincing than a single trial on a single person, but the evidence is still weak.

It would be relevant to mention epidemiological studies, a different category of study on human subjects. A detailed discussion of epidemiology is outside the scope of this book (and I am not an expert either). In brief, an epidemiological study conducts no actual experiment but explores in various ways existing statistics about some segments of the population. The goal is to identify correlations (associations) among some factors; for example, between smoking and life expectancy or between consumption of red meat and cancer. Only a tiny minority of these studies can show that one factor is likely to cause the other—the studies that satisfy many or all of the so-called Bradford–Hill criteria [36]. Most show a correlation, not causation. (Reference [37] contains a list of epidemiology-based lifestyle recommendations that scientists refuted by randomized controlled trials.) The mass media occasionally report epidemiological studies in misleading ways. A journalist can report a statistical correlation as a causal relationship, even though the authors of the research paper make no such conclusion. For example, a study may show a correlation between some personal habit and some disease; that is, people who have some habit are more likely to have some disease. Your evening TV news program may report this as “so-and-so habit can cause such-and-such disease” or “such-and-such habit increases the risk of so-and-so disease.” In actuality, if the research article does not prove that the habit is likely to cause the disease, then the reported correlation does not necessarily imply causation. The habit and the disease may have no causal relationship whatsoever between them. A third, unidentified factor can be the real cause of both of them. Alternatively, the disease may have symptoms that make the patients more likely to adopt this habit. In other words, it is the disease that causes the habit, not the other way around. Tobacco smoking and schizophrenia are a good example. Up to 80 to 90% of schizophrenic patients are smokers, but to date, it remains unclear which causes which. Some studies suggest that smoking is a form of self-medication with nicotine by the patients, and thus it is possible that schizophrenia leads patients to smoking.

Keep in mind that a statistical correlation between a habit and a disease is necessary but not sufficient for the existence of a causal relationship between them. If there is no statistical association between a habit and a disease, then there is a 99% chance that the habit does not cause the disease. On the other hand, if the statistical correlation does exist, then the habit may cause the disease. A randomized controlled trial will be necessary to either prove or refute this hypothesis. We can conclude that one should exercise caution when interpreting results of epidemiological studies.

OK, what you saw above is an idealized or sanitized version of science. In conclusion, I will describe what is really going on, according to my ~12 years in academic science (biomedical research, not on human subjects). I published 13 scientific articles in peer-reviewed journals, most of them as a first author. I spent about an equal amount of time at Russian and American academic institutions (6 years each). From what I have seen, scientific fraud is widespread both in Russia and in the United States. Younger researchers, such as grad students and postdocs, falsify research data because they have to “publish or perish.” Grad students do not have to publish so much as they have to show valid results and defend the doctoral thesis. Professors or heads of laboratories commit fraud because they have to renew their research grants: the results they publish should match the hypothesis stated in the original grant proposal. If the results do not confirm that hypothesis, then the research funding will not be renewed or will be much more difficult to renew. We are talking about half a million to a million dollars of research funds lost. Consequently, the grantee’s status at the academic institution will diminish, and he or she may lose lab space and not receive a promotion or tenure.

Among my own publications, I suspect (but cannot prove) that one published paper contains fraudulent results, which were doctored by my lab supervisor in Russia. This happened because I had a risky project and ended up with negative results in the lab by the end of my Master’s program, with nothing to graduate with. My supervisor told me to let him conduct one key measurement alone, which took place in another building. He came back with positive results. After that, I graduated and went to the States, where I had already been accepted to a PhD program. The other twelve of my publications do not contain fraud as far as I know, but while in the U.S., I asked one journal to retract a paper related to the topic of my PhD thesis that listed me as a coauthor without my permission. The chief editor of this American journal was a friend of the American professor who had doctored the results and inserted my name into the list of authors without my permission (the fraudulent paper contained some of my results, which were not fraudulent). The chief editor started asking me why I wanted the paper retracted, but at this moment the professor in question intervened, and all of a sudden the chief editor offered me a deal: instead of pursuing my questions, he would simply remove my name from the paper. Not willing to get caught up in an unpleasant and lengthy fraud investigation, I agreed to simply remove my name. About a year later, I changed my mind and submitted a formal complaint about scientific fraud to the same chief editor, requesting retraction of that fraudulent paper. I accused the professor and his postdoc of falsifying the results. The chief editor (to his credit) offered to recuse himself from the investigation, but I let him conduct it himself. He conducted a laughable investigation and presented me with a ridiculous rebuttal (which made no sense) from the accused professor. The chief editor ruled that my accusations were demonstrably false and closed the case. The fraudulent paper with bad results is still available for everyone to read, without my name on it. The next step for me was to submit a fraud complaint to the NIH, which provided funding to the accused professor. I dawdled for several months, weighing my options. Then this professor’s daughter committed suicide by jumping in front of a subway train (she had suffered from bipolar disorder for many years, and I had met her at his house a few years earlier). At this point, I decided that he had been punished enough; it seemed to me that one more fraudulent paper in the sea of bad science was not going to make much difference anyway. So I dropped this case.

This particular fraud was possible because the professor in question prefers to hire foreigners almost exclusively. My guess is that this approach is convenient because foreigners are totally dependent on the employer for the visa and later immigration procedures and are willing to take a lot of abuse. They cannot quit their job or report fraud without losing the visa and destroying their chances for a green card. Even after the non-American postdoc or grad student leaves the lab and goes to work somewhere else, he or she remains dependent on the American professor for many years afterward, because the professor will be writing various recommendation letters to the immigration authorities until the foreigner gets a green card. Such a professor hires foreigners and then promotes and rewards those who produce copious results that support the hypothesis of the grant proposal (two or three publications per year is good productivity for a postdoc). Conversely, this professor ousts employees or grad students who produce results that contradict the funded research projects and thus interfere with renewal of the grants (these negative results are never published). A good half or more of the articles published by such “productive and successful” professors contain fraudulent results, which waste the time and effort of the scientists who read these papers, not to mention that such “self-renewing” research grants are a total waste of taxpayers’ money. This bad science serves two purposes: (i) to publish as many articles as possible and as quickly as possible, for the advancement of a researcher’s academic career and access to research funding; and (ii) to renew this person’s research grants. On the basis of what I have seen in the places where I worked and in other labs, I would say that 60–80% of research articles contain bad results, either because of fraud (~30% of articles) or because of bad methodology (another 30–40% of articles). Note that this figure is still much better than the 99.9% of bad information that you will get from non–peer-reviewed literature, such as nonfiction books and web pages.

My estimates are supported by some recent studies showing that the results of only about 10–30% of scientific studies are reproducible, depending on the field [1028–1031]. This means that the other 70–90% of published research is fraudulent, flawed, or both. David Healy and other researchers have exposed various shenanigans of pharmaceutical companies: ghostwriting of scientific articles, permanent inaccessibility of original raw data, selective publication of clinical trials (not publishing trials that yield negative results), miscoding and mislocation of data, use of statistics to obscure the lack of access to data, misrepresenting adverse effects of drugs as symptoms of the disease, misrepresenting withdrawal effects as a relapse, misrepresenting older, more effective, and cheaper drugs as less effective, and many other tricks. If you wish to find out whether any particular study offers valid results, you can search the website PubPeer.com, which offers reviews of scientific articles after they have been published: so-called postpublication peer review. You can post an anonymous review yourself if you want. PubMed.gov now also allows for commenting on articles. To see the latest news about scientific fraud, visit RetractionWatch.com.

In conclusion, it is worth mentioning that I endorse the scientific method, but I do not automatically agree with every theory or guideline handed down by the scientific and medical establishment. Money, politics, and the way science is currently organized sometimes lead to distortions and adoption of false theories by the establishment (a good example is the currently rejected theory that dietary cholesterol causes heart disease). Because peer review is not very effective at filtering out flawed studies [1034], corporations and the government can institute “widely accepted” (but false) theories by funding some types of biased research. Thus, the best you can do is keep an open mind and do your own research (carefully read each study yourself). Regarding the reliability of various kinds of evidence, see the section “Should you believe newspapers?” in Chapter Seven and Appendix XI “The scientific establishment can be wrong.”

Key points:

Can mental and physical exercise make you smarter?

Don’t get me wrong: I am not against physical exercise. In fact, I exercise for about 40 minutes four times a week, and there is plenty of evidence that physical exercise is good for health; in particular, it can be used to treat depression and anxiety [1047, 1048]. There are a lot of books claiming that physical exercise and brain training (mental exercises) will improve brain function and make you smarter. These claims are based on statistical or correlational studies ([957, 958] and Table 2 in [959]) and other circumstantial evidence, which cannot prove the point one way or the other. Direct experiments, on the other hand, show that physical exercise programs do not produce consistent benefits in academic performance and intelligence. Physical exercise programs have no effect on the IQ scores of people suffering from mental retardation [960, 961]. Numerous studies on healthy children and adults have yielded inconclusive results ([962–964] and Table 1 in [959]). If we are talking about becoming smarter, fluid intelligence is the most relevant mental ability. Exercise programs do not improve (academic) fluid intelligence, or the improvement is tiny and statistically insignificant [960–964]. In the study by Mohamed Elsayed and colleagues [963], the total difference in fluid intelligence before and after an exercise program was not statistically significant (36 test subjects total), and the size of the improvement was minuscule. One component of fluid intelligence out of four remained unchanged, one decreased, and two increased slightly. In the study by A.K. Brown and colleagues [962], an exercise program involving 82 test subjects did not produce a statistically significant change in any component of fluid intelligence. According to Table 1 of that article [962], there was a slight increase in two components and a minor decrease in two other components of fluid intelligence in the exercise group. We can conclude from these studies that the cognitive benefits of physical exercise either do not exist or are too small. Physical exercise will not make you smarter.
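As an illustration of what “statistically insignificant” means at this sample size, here is a minimal sketch in Python. The numbers (a 0.5-point average gain and a 5-point standard deviation of the before/after score changes) are hypothetical assumptions for illustration only, not data from the cited studies.

```python
# A minimal sketch with hypothetical numbers (not data from the cited studies):
# a tiny average gain in a small sample cannot be distinguished from chance.
import math
from scipy import stats

n = 36             # roughly the sample size of the Elsayed et al. study
sd_change = 5.0    # assumed standard deviation of individual score changes
mean_change = 0.5  # assumed tiny average improvement, in test-score points

se = sd_change / math.sqrt(n)          # standard error of the mean change
t_crit = stats.t.ppf(0.975, df=n - 1)  # two-sided 5% critical value of t
print(f"95% confidence interval: {mean_change:.1f} +/- {t_crit * se:.1f} points")
# Prints roughly "0.5 +/- 1.7 points": the interval includes zero, so a gain
# this small, with this many subjects, is statistically insignificant.
```

In other words, with so few subjects, a true improvement would have to amount to several points before it could be reliably detected.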

There may be some exceptions or qualifications to the above conclusion. Sometimes mental abilities are impaired as a result of a physical illness or a physical disability. In this case physical exercise may help to restore the cognitive abilities to some extent [896, 965, 966]. This may happen through improvement of the general state of health (recovery from an impairment, not enhancement of the norm). Other studies show that physical exercise has transient effects on some mental abilities, such as information processing speed and attention control [967–969]. These effects disappear within one to two hours after a bout of exercise. A recent article shows that an exercise program can improve memory in healthy older people, but the size of the effect is small [970]. Still other studies have demonstrated that professional athletes perform certain tasks (related to their type of sport) more effectively than average people [971–975]. Yet the studies listed in this paragraph do not show that physical exercise will increase the intelligence of a healthy person.

As for mental exercises, or “brain training,” the conclusions are similar. Two large studies in adults have concluded that mental exercises do not improve general mental abilities such as fluid intelligence, short-term memory, attention control, and so on [917, 918]. The benefit is either minuscule or nonexistent. Mental exercises do improve the skills related to the type of exercise that a person performs. In other words, with time and practice, people tend to learn how to perform various mental tasks better and more quickly. Nonetheless, there is no improvement in general mental abilities or in the performance of mental tasks unrelated to the exercises.

There are some exceptions too, which, however, do not change the overall conclusion. One recent study has found that half of children may improve their short-term memory and fluid intelligence as a result of a special type of mental exercise [976]. This benefit was almost undetectable in the other 50% of children, and the average improvement in fluid intelligence (all children combined) was small. That mental exercises can improve brain function in some children is an interesting and important finding. Nonetheless, to prove the validity of this finding, one or more independent groups of researchers have to confirm it. In academic science, when a group of researchers reports an exciting finding, other laboratories often show that the results are false. Similarly, Helga and Tony Noice published a research article showing that theater training can improve mental abilities in older adults living in retirement homes [977]. Only a single group of researchers has done this experiment so far, and the study deals with recovery from an impairment, not enhancement of the norm. Numerous other studies fail to show a significant benefit from mental exercises [917, 918], and we can conclude that mental exercises do not enhance mental abilities in the majority of people.

An analogy with a desktop computer can illustrate this point. Mental exercises are similar to loading more software on the computer and making the computer work harder and longer. This approach cannot improve computer hardware: processor speed, memory, and the size of the hard drive—the characteristics that are “mental abilities” of the computer. Note that certain types of daily mental tasks do produce subtle changes in the brain structure of humans [972, 978]. Yet these alterations are a “change” or “adaptation,” not an “upgrade,” because they do not improve brain function in general. (To continue the computer analogy, if you make the computer work harder, the fan cooling the processor and the indicator of hard drive activity will be constantly on. These changes do not mean that you have upgraded the computer by means of the excessive work load.)

My own experience with physical exercise and brain training leads me to the same negative conclusions. Whereas the physical health benefits of exercise are indisputable, exercise is unlikely to have any significant effect on intelligence in the vast majority of cases.

Key points:

The natural nutrition theory of intelligence and its limitations

We reviewed some introductory information in the previous sections and now it’s time to introduce the first theory of several presented in this book. First, I would like to refresh your memory on the basics of the theory of evolution. This is necessary because the “natural nutrition theory of intelligence” is based on evolution. (Readers can skip the refresher by pressing the skip button or this link.)

Each form of life (a living organism) inherits traits (features) from its parents, and this process involves genes. A mutation in a gene can cause an old trait to disappear or a new one to appear in the offspring. Evolution is the gradual change in living organisms over generations, a process mediated by genetic changes. Life forms of the same species often coexist as a group called a population, which occupies a certain geographic location or an ecological niche. Natural selection is a process of accumulation of those traits within a population that are useful for survival and reproduction of an organism. Natural selection also involves a process of gradual disappearance of the traits that impede successful reproduction and survival. Genetic changes (mutated genes) that provide an advantage to an organism become more widespread within the population because organisms carrying those mutations are more likely to survive and to procreate. The process of natural selection makes a population genetically adapted to its environment. Put another way, most of the individual plants or animals (or other life forms) within the population become well adapted to the environment that the population inhabits. Individuals become more adept at evading predators, at obtaining food, and at digesting the types of food that are present in a given environment. Separate populations of the same species of plants or animals can develop into new species if there is no genetic exchange (cross-breeding) between these populations, or if the genetic exchange is limited. For example, populations can become isolated geographically for an extended period of time that covers hundreds or thousands of generations. Such isolation can give rise to new species (a process called speciation). Charles Darwin was the founder of the evolutionary theory. His seminal work, “On the Origin of Species” (1859), laid the foundation of modern evolutionary biology and the related biological sciences. Later developments in genetics, archaeology, and other sciences led to the Modern Evolutionary Synthesis, which is the modern understanding of the process of evolution.

The rate of evolution (the speed of change) varies over time. The current mainstream view is that the rate of evolution is not constant: evolutionary changes occur by leaps and bounds. Certain periods of time (tens of thousands of years) involve rapid changes, which are followed by long periods when little or no change takes place (hundreds of thousands of years). Thus, short periods of rapid change alternate with longer periods of slow change. Nonetheless, evolutionary changes are gradual and take many generations (hundreds or more), even during periods of rapid change. Numerous studies have confirmed the evolutionary theory, and evolution by natural selection is an accepted fact among professional biologists [38].

Now we can get started on the natural intelligence theory (an abbreviation of the “natural nutrition theory of intelligence”). There are two notable differences between the diet of primates (and other animals) in the wild and a typical human diet. The first difference is that a typical human diet in industrialized countries is chock-full of various artificial ingredients that animals do not consume. These include food additives (salt, sugar, vinegar, nitrates, nitrites, monosodium glutamate, and others) and dietary supplements (artificial vitamins, minerals, and herbal extracts). The second difference is that animals living in the wild consume food that is raw (uncooked), while humans consume a predominantly cooked diet. In other words, modern primates [39–41] and the evolutionary predecessors of humans consume(d) a 100% raw diet that is free of any artificial chemicals [42–45]. It seems logical to hypothesize that this sort of diet, or a similar diet, is more “natural” for the human brain than the typical modern diet. Food additives have been present in the human diet for no more than a few centuries [43], whereas the cooking of food has been with us for about 300,000 years [42, 46]. From the standpoint of evolution, this amount of time may not be sufficient for full genetic adaptation to this novel mode of nutrition. In other words, it is possible that a raw diet that is free of artificial ingredients will improve the mental abilities of modern humans. For convenience, we will refer to this diet as the “ancestral diet” throughout the book.

At this point some readers may have gotten the impression that this text is advocating abandoning all food safety guidelines and recommending a 100% raw food diet containing meat and fish. Please note that this book does not recommend this sort of diet.A In particular, readers should avoid consuming raw animal foods such as raw meat, raw fish, or raw milk because these products carry a risk of serious infectious disease (see Table 1 below). In my view, it is not necessary to subject oneself to this sort of risk in order to improve mental abilities. This book advances some arguments in favor of diets that are safe and at the same time chemically similar to the ancestral diet. It is also possible to improve mental abilities by reducing food intake without following any strict diets, as you will see in the last section of this chapter.

Table 1. Common pathogens that can occur in raw animal products. (To skip the table, press the skip button or this link.)

Pathogen: Campylobacter jejuni (bacterium)

Source: raw beef, poultry, and dairy
Symptoms of infection: abdominal pain, diarrhea, nausea, and vomiting

Pathogen: Clostridium botulinum (bacterium)

Source: seafood (such as improperly canned goods; smoked or salted fish)
Symptoms of infection: double vision, inability to swallow, difficulty speaking, and inability to breathe; can be fatal

Pathogen: Clostridium perfringens (bacterium)

Source: raw meats
Symptoms of infection: abdominal cramps and diarrhea

Pathogen: Cryptosporidium parvum (single-celled parasite)

Source: raw dairy
Symptoms of infection: without symptoms or watery diarrhea, stomach cramps, upset stomach, and slight fever

Pathogen: Escherichia coli, strain O157:H7 (bacterium)

Source: raw beef and pork
Symptoms of infection: abdominal pain, diarrhea, nausea, and vomiting

Pathogen: Giardia duodenalis (single-celled parasite)

Source: raw dairy
Symptoms of infection: without symptoms or diarrhea, abdominal cramps, and nausea

Pathogen: Listeria monocytogenes (bacterium)

Source: raw pork, poultry, dairy, and seafood
Symptoms of infection: abdominal pain, diarrhea, nausea, and vomiting

Pathogen: Norovirus (Norwalk-like virus)

Source: raw oysters, shellfish
Symptoms of infection: diarrhea, nausea, vomiting, stomach cramps, headache, and fever

Pathogen: Norwalk virus

Source: raw seafood
Symptoms of infection: nausea, vomiting, diarrhea, and abdominal pain; headache and low-grade fever may occur

Pathogen: Opisthorchis felineus and Opisthorchis viverrini (flatworms)

Source: raw fresh-water fish
Symptoms of infection: fever, nausea, pain on the right side of the abdomen; possible obstruction of the bile duct

Pathogen: Salmonella (bacterium)

Source: raw beef, pork, poultry, eggs, dairy, and seafood
Symptoms of infection: abdominal pain, diarrhea, nausea, and vomiting

Pathogen: Shigella (bacterium)

Source: raw dairy products
Symptoms of infection: nausea, vomiting, fever, abdominal cramps, and diarrhea

Pathogen: Staphylococcus aureus (bacterium)

Source: raw dairy products, beef
Symptoms of infection: nausea, vomiting, fever, abdominal cramps, and diarrhea

Pathogen: Taenia saginata (tapeworm)

Source: raw beef
Symptoms of infection: without symptoms or abdominal pain, weight loss, digestive disturbances, and possible intestinal obstruction; irritation of perianal area

Pathogen: Taenia solium (tapeworm)

Source: raw pork
Symptoms of infection: without symptoms or abdominal pain, weight loss, digestive disturbances, and possible intestinal obstruction; irritation of perianal area; infection of some tissues (other than intestines) with larvae is possible and can be fatal if involves central nervous system or heart

Pathogen: Toxoplasma gondii (single-celled parasite)

Source: raw pork, lamb, and wild game
Symptoms of infection: without symptoms or “flulike” symptoms such as swollen lymph nodes or muscle aches; immunocompromised patients can develop severe toxoplasmosis: damage to the eyes or brain

Pathogen: Trichinella spiralis (intestinal roundworm, larvae can form cysts in muscle tissue)

Source: raw pork, wild boar, bear, bobcat, cougar, fox, wolf, dog, horse, seal, and walrus
Symptoms of infection: nausea, diarrhea, vomiting, fever, and abdominal pain, followed by headaches, eye swelling, aching joints and muscles, weakness, and itchy skin; severe cases: difficulty with coordination, heart and breathing problems, can be fatal

Pathogen: Vibrio cholerae (bacterium)

Source: raw seafood
Symptoms of infection: without symptoms or symptoms of cholera: severe diarrhea, vomiting, and leg cramps; severe dehydration and death can occur without treatment

Pathogen: Vibrio parahaemolyticus (bacterium)

Source: raw shellfish, other seafood
Symptoms of infection: chills, fever, and collapse

Pathogen: Vibrio vulnificus (bacterium)

Source: raw shellfish, other seafood
Symptoms of infection: chills, fever, and collapse

Pathogen: Yersinia enterocolitica (bacterium)

Source: raw meats and seafood
Symptoms of infection: bloody diarrhea, nausea, and vomiting

Going back to the natural diets of animals in the wild: the first primates appeared 60 to 70 million years ago. Humans belong to the order of primates, although we appeared much later in the course of evolution. All known mammals living in the wild today, including primates, consume a raw diet consisting of plants, animals, or both. For a brief period after birth they consume mother’s milk. They do not consume artificial ingredients (pure chemicals such as food additives and dietary supplements) aside from those that enter their diet by accident through environmental pollution. Early hominids, or “great apes,” appeared approximately 15 million years ago and were the evolutionary predecessors of Homo sapiens. Great apes most likely consumed a diet that resembles the natural diet of modern chimpanzees: raw plants, raw meat and fish, and no dairy in adulthood [39–41]. These early hominids most likely did not consume any artificial ingredients because they did not know how to manufacture those chemicals. Homo species did not know cooked food until the predecessors of Homo sapiens mastered cooking with fire about 300,000 years ago [42, 46]. These immediate predecessors, as well as the first Homo sapiens (who appeared approximately 100,000 years ago), started consuming cooked food, but there is no evidence that they consumed any artificial ingredients. Domestication of cattle and cultivation of wheat 10,000 to 11,000 years ago led to consumption of dairy by humans at adult age and to widespread consumption of cereal grains [43]. Humans started consuming salt, one of the first pure chemicals in the diet, around 6000 B.C. [47]. A small number of chemicals, such as potassium nitrate (a preservative), entered the Western diet during the Middle Ages. A large influx of artificial ingredients into the human diet (several hundred food additives, at last count) occurred during the Industrial Revolution of the last several centuries, the period when the manufacture of sophisticated chemicals came into its own. In summary, cooked food entered the diet of humans approximately 300,000 years ago, while most artificial ingredients entered the Western diet during the last several centuries. By “Western diet” or “modern diet” I mean the diet recommended by official food pyramids, such as the U.S. Department of Agriculture’s MyPlate.

To sum up, Homo sapiens (appearing approximately 100,000 years ago) evolved from hominids, or great apes (appearing approximately 15 million years ago). During this process, these primates stayed on a diet that was 100% raw and free of artificial ingredients (the “ancestral diet”). Cooked food appeared earlier than the first Homo sapiens did (300,000 years ago versus 100,000 years ago). Nonetheless, the fact remains that it took roughly 15 million years for the Homo sapiens species to evolve from the earliest hominids. For most of that evolutionary time, these primates lived off a raw diet free of any artificial chemicals.

Evolution by natural selection ensures that a species becomes well adapted to its environment, including the food available in that environment. Homo sapiens must be well adapted genetically to the type of nutrition that was characteristic of the hominids during the past 15 million years. Therefore, it seems logical to theorize that the human brain will function optimally on the ancestral diet. Conversely, it is conceivable that the human brain is not well adapted to nutrition that includes cooked food and artificial ingredients. This is because cooking and artificial chemicals are recent innovations from the standpoint of evolution. It may take several million years of natural selection for humans to adapt genetically to this new mode of nutrition. Right now the process of natural selection is ongoing. Unnatural nutrition is similar to putting the wrong kind of fuel in the gas tank of a car [48]. According to this theory, if we compare two random diets, the human brain will function worse on a diet that is more cooked (i.e. the percentage of cooked food in the diet is higher). The brain will be worse off if the cooking involves higher temperatures or if the diet contains greater amounts of artificial chemicals. In a concrete and testable form, the most basic assumption of the natural intelligence theory is the following.

The ancestral diet (for example, a 100% raw diet that consists of fruits, vegetables, nuts, salt-water fish, and ground meat, and excludes any artificial ingredients such as food additives and dietary supplements), when used for 4 to 7 days, will improve fluid intelligence.

Some readers might say that this ancestral diet is too dangerous and thus untestable, as no ethics committee will ever approve this kind of experiment on human subjects. This is a valid concern. Endnote B describes possible methods, such as pascalization, that should make the ancestral diet safe in the near future. As you will see in later sections, I tested the above statement on myself repeatedly, and it appears to be true. Nonetheless, readers should not (and do not have to) follow my bad example. I developed three “smart diets” that are safe and as effective as, or even more effective than, the ancestral diet for different types of tasks. We will talk about these diets in detail in a later section of this chapter. The following implication of the natural intelligence theory may allow for testing of this theory at present:

The more similar a diet is to the ancestral diet, the greater the improvement of intelligence that this diet will provide.

A lower percentage of cooked food in the diet, a lower cooking temperature, and the exclusion of artificial ingredients all make a diet more similar to the ancestral diet. Therefore, people can modify and test the basic assumption of the theory safely by replacing the phrase “ancestral diet” with one of the “smart diets” in the italicized statements above.

One piece of evidence that supports the natural intelligence theory comes from studies on the nutrition of infants. One study shows that infants who received “unnatural food” (i.e., baby formula) grow up to have an IQ about 5 points lower than that of infants who consumed “natural food” (i.e., mother’s milk) [49]. Another statistical study shows that infants who eat a healthy diet tend to have higher IQ scores at age 4 than infants consuming an average mixed diet [846]. There are other studies supporting the natural intelligence theory [826, 837, 838, 858, 863, 866, 908, 943]; we will discuss them later in this chapter.

I developed the natural intelligence theory independently. After some research, I found that it has some similarities to the theory behind the Paleolithic diet developed by brilliant researchers S. Boyd Eaton, Melvin Konner, and Loren Cordain [50, 51]. Both theories involve the concept of an ancestral diet that humans had adapted to in the course of evolution. Both theories imply that humans did not have sufficient evolutionary time to adapt to the modern diet, which is different from the ancestral diet. According to its authors, the Paleolithic diet (or Paleo diet) is the diet of humans during the Stone Age, or more than 11,000 years ago. It excludes foods that entered the human diet with the widespread adoption of agriculture: dairy, grains, legumes, and most food additives as well. The theory behind the Paleolithic diet suggests that the ancestral diet of humans consisted of cooked meat and fish, raw and cooked fruits and vegetables, and nuts. (Some evidence emerged recently that humans consumed cereal grains during the Stone Age, approximately 30,000 years ago [923].) Note that documented fossil evidence of meat consumption by hominids dates back 3.4 million years [922].

There are several differences between the Paleolithic diet theory and the natural intelligence theory. None of the statements below represents criticism or a desire to demonstrate that “my theory is better.” First, the natural intelligence theory deals with mental abilities and ignores the physical health implications of the diet. Second, the natural intelligence theory does not prescribe any specific proportions of macronutrients (fat, protein, and carbohydrates) in the diet. The authors of the Paleolithic diet, on the other hand, believe that the ancestral diet contained well-defined proportions of macronutrients that a person should comply with in order to achieve optimal health. The latest proportions published by these authors are the following (by calories): 35% fats, 35% carbohydrates, and 30% protein (a short worked example of converting these percentages into grams appears a few paragraphs below). As you will see in later chapters of this book, the natural intelligence theory allows for wide variations in the proportions of macronutrients. These proportions can range from protein-free and low-fat to high-protein and high-fat diets, depending on the task in question. Third, the Paleolithic diet ignores the implications of cooking. This position is justified because the switch from an all-raw diet to a partially cooked diet (about 300,000 years ago) occurred well before the adoption of agriculture (11,000 years ago). Fourth, the authors of the Paleolithic diet assume that saturated fat is bad for health and that lean animal products are therefore preferable. The natural intelligence theory takes no position on saturated fat. As you will see in a later section of this chapter, there is plenty of evidence that saturated fat has no adverse effects on health. Thus, either exclusion or inclusion of foods that contain a lot of saturated fat is optional, according to the natural intelligence theory. Fifth, the theory behind the Paleolithic diet suggests that dairy products and cereal grains are unnatural for humans because they appeared in the human diet recently (in the last 10,000 years, the agricultural era). The natural intelligence theory, on the other hand, suggests that a person can include novel foods of natural origin (plants, animals, mushrooms, milk, and so on) in the diet if one or more of the following conditions are true.

  1. These novel foods do not contain substances toxic to humans in the raw form and are free of artificial ingredients.
  2. You can cook or pasteurize them and this does not result in formation of undesirable chemicals. (There is evidence that this is true of fruits, vegetables, and dairy.)
  3. Practical testing shows that these novel foods do not worsen psychological well-being or mental performance in the short term.

In most cases, one can verify the latter condition by following a monodiet for 3 days or longer (Appendix III). My personal experience suggests that raw extract of wheat and pasteurized milk (and pasteurized cultured milk) are nutritious and do not have negative effects on mental abilities. Yet these foods do not fit the strict definition of ancestral food. As explained later, boiled grains are useful for some mental tasks. Yet boiled grains contain undesirable chemicals and do not fit the criteria of either the natural intelligence theory or the Paleolithic diet.
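Returning to the macronutrient proportions of the Paleolithic diet mentioned a few paragraphs above, the following minimal sketch shows how a 35/35/30 calorie split translates into grams per day. The 2,000-kcal daily target is a hypothetical example, and the standard conversion factors (9 kcal per gram of fat, 4 kcal per gram of carbohydrate or protein) are assumed.

```python
# Convert a 35/35/30 calorie split into grams of each macronutrient
# for a hypothetical 2,000-kcal day (illustration only).
daily_kcal = 2000
calorie_share = {"fat": 0.35, "carbohydrate": 0.35, "protein": 0.30}
kcal_per_gram = {"fat": 9, "carbohydrate": 4, "protein": 4}  # standard factors

for nutrient, share in calorie_share.items():
    grams = daily_kcal * share / kcal_per_gram[nutrient]
    print(f"{nutrient}: {grams:.0f} g/day")
# fat: 78 g/day, carbohydrate: 175 g/day, protein: 150 g/day
```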

The seeming simplicity of the natural intelligence theory is attractive, but careful examination reveals that it has a number of limitations. (Readers can skip the detailed discussion of the limitations and read about them in the key points later: press the skip button or this link to jump to the end of this section.)

  1. Humans may have adapted to cooked food, at least partially, through natural selection during the last 300,000 years (corresponding to the use of fire for cooking) [46]. Nonetheless, humans most likely did not have sufficient evolutionary time to adapt to the numerous artificial chemicals that entered the human diet a few centuries ago [43]. Put another way, cooked food may be “somewhat natural” for modern humans, while food additives are still “unnatural.” There are genetic differences among individual human beings, and some people can function exceptionally well on the modern conventional “unnatural” diet, as if it were a “natural diet” for them. It is also impossible to determine with any degree of certainty what a “natural diet” of Homo sapiens is. Studies suggest that contemporary primitive diets vary in their composition, even when free of artificial ingredients. Examples include diets that consist mostly of plant foods and diets that consist almost exclusively of animal products (for example, the diet of Eskimos). These observations suggest that humans are adapted to diets that vary widely, both in their content of plant or animal food and in their proportions of macronutrients: there are segments of the world population consuming high-fat and low-fat, high-protein and low-protein, all-flesh and all-plant diets. For the sake of convenience, “natural nutrition” in the context of this book means a diet that is raw or almost raw and free of all artificial ingredients. Keep in mind, however, that many different diets can fit these criteria of “natural nutrition,” and it remains difficult to say what a single “natural diet” for humans would be.
  2. Raw animal products carry a risk of serious infectious diseases [52–54]. Thus, the risk of serious illness (Table 1) may outweigh any possible benefits from this food source. During the last 300,000 years (the cooking era), the human immune system may have grown unaccustomed to the pathogens that occur in raw animal foods such as meat and fish, and may therefore no longer provide adequate defense against such pathogens. In other words, raw animal products may have become “unnatural food” for modern humans, at least for those living outside the Arctic Circle. (Northern peoples such as the Chukchi and the Inuit consume raw animal foods on a daily basis and do well on this sort of diet [55–58].) On the other hand, it is possible that the human immune system did not change during the last 300,000 years. In this case it is possible that cooking animal products provides more benefits for health (food safety and better digestibility of food) than negative effects. Thus, based on current medical science, humans will enjoy healthier and longer lives if they continue to cook all animal products. Nevertheless, the natural intelligence theory suggests that if raw animal products are free from pathogens, then this food should provide a sustained improvement of mental abilities. Another obstacle for the raw diet is that raw animal food is socially unacceptable in most cultures. This state of affairs is expected to change when this type of food becomes safe in the near future. Endnote B describes possible technological developments, such as pascalization, that should ensure the safety of uncooked animal products.
  3. The natural intelligence theory may sound similar to naturopathy, or “natural medicine,” but it is not the same. There are several differences. Unlike naturopathy, the natural intelligence theory is not intended for diagnosis, prevention, treatment, or cure of any disease. The natural intelligence theory falls under the jurisdiction of psychology, not medicine. Nevertheless, this theory may have some applications in psychiatry. A critical overview of naturopathy is available on the QuackWatch website. As an aside, in my view, a common misconception among advocates of “natural” healing methods is the following assumption: “An unnatural diet (or unnatural lifestyle) has caused a disease, so a return to the natural diet (or natural lifestyle) will cure the disease.” Although the first part of the statement is often true, the second part is not always true. The assumption just stated is incorrect if the pathological changes are severe or irreversible. For example, if exposure to carcinogens caused a tumor, then simple withdrawal of the carcinogens will not cure the cancer. Chemotherapy, surgical treatment, or radiotherapy will be necessary to cure the disease or to produce a significant clinical improvement. The assumption may be true, however, in some cases when the pathological changes caused by an “unnatural lifestyle” are minor or reversibleC (e.g., vitamin deficiencies, some mental disorders). It is noteworthy that adherence to Harvard’s Healthy Eating Pyramid can reverse the so-called metabolic syndrome [921]: the combination of excessive belly fat, high blood pressure, and unhealthy levels of blood lipids and blood sugar. A general improvement in the quality of nutrition was recently shown to alleviate depression in psychiatric patients [1044], and there is plenty of evidence that physical exercise can be used to treat depression and anxiety [1047, 1048].
  4. At first glance, prescription drugs are “artificial ingredients” in the diet because most drugs are pure chemicals. The fourth limitation of the natural intelligence theory is that people with chronic illnesses who take medication will not improve their mental abilities if they discontinue the medication. This is because their general state of health is likely to deteriorate if they stop taking the medication, and illness usually causes a drop in IQ scores. Prescription drugs are an integral part of health care. It is true, however, that a number of physicians and biomedical researchers have expressed concern over how pharmaceutical companies develop and market drugs. In many cases, the benefits of pharmaceutical drugs are exaggerated, whereas adverse effects are not fully and honestly reported by their makers. This is especially true for psychiatric drugs, most of which do more harm than good [1045, 1046]. For this reason, prescription drugs should have a much smaller place in health care than they do today. Nonetheless, not all prescription drugs are bad or useless. The pharmaceutical industry has many flaws, but prescription drugs are not one and the same as the pharmaceutical industry. If you dislike drug companies, you should not oppose all prescription drugs, especially older and cheaper generic drugs, which bring much smaller profits to drug companies than patented drugs do. If you are concerned about these corporations getting rich from your suffering, you shouldn’t be. There are two excellent articles on this subject by Dr. Arnold Relman, a former editor-in-chief of the New England Journal of Medicine [59, 60]. Although prescription drugs can have side effects, so can other treatments, including “natural therapies.” The dietary changes and hydrotherapy techniques described in this book may appear to be “natural” treatments, but they can have side effects, as you will see later. Drugs do not always have side effects: some people can take high doses of painkillers, such as aspirin and acetaminophen, for weeks without any noticeable side effects (this is not true for everyone). A favorite phrase of the proponents of alternative medicine is that “drugs do not address the cause of an illness; they only treat symptoms.” There are several problems with this statement. First, the cause of many illnesses is either unknown or the result of an unknown combination of genetic and environmental factors. Wild animals suffer from many diseases that afflict humans, including viral and bacterial infections, cancer, and neurodegenerative diseases; this observation rules out an “unhealthy lifestyle” as the only cause of human diseases. If the cause of an illness is unknown, it would be irrational not to treat symptoms and to subject the patient to unnecessary suffering. The cause of an illness is often known (for example, a genetic mutation or extreme old age) but uncorrectable. In some cases, doctors know the cause of an illness and can eliminate it, but this action has no effect on the disease in question, and other interventions become necessary; an example is cancer caused by exposure to occupational or environmental carcinogens. Second, many drugs do deal with the cause of an illness, for example, antiviral drugs and antibiotics. Third, even when doctors know the cause of an illness (for example, the influenza virus) and know how to treat it (e.g., with antiviral drugs), the treatment may not relieve symptoms right away. Again, it is irrational and cruel not to treat the symptoms and thereby subject the patient to unnecessary suffering.
  5. The natural intelligence theory implies that there are “natural” and “unnatural” types of food. One can argue that humans and everything they produce are a part and product of nature (according to the evolutionary theory). Therefore, all types of modern human food are “natural,” including junk food. For simplicity’s sake, “unnatural food” in this book means food that contains food additives, has gone through complex chemical processing, or has undergone cooking at temperatures above 100°C (212°F). This distinction is arbitrary and a convenience in the context of this book. Nonetheless, this definition of “unnatural food” is based on evidence presented in the next two sections of this chapter.
  6. This book’s diets contain some food products that may appear to be “unnatural.” For example, adult animals in the wild do not consume dairy, but some of the proposed diets include milk. You could also say that buttermilk and kefir are “unnatural” because they are processed foods (cultured with special bacteria). Water extract of grains, vegetable oil, and cheese are “processed foods,” and thus may also appear to be “unnatural.” In my defense, I consider these food products “natural” because they are animal- or plant-source foods (natural origin) that did not undergo complex chemical processing. Culturing of milk with special bacteria (kefir and buttermilk) is “natural” because it can occur in dairy products without human intervention. Removal of insoluble indigestible components (for example, juicing; water extract of wheat lacks insoluble fiber) does not affect chemical composition of nutrients. Therefore, this method also belongs in the “natural” category.
  7. Nutrition is a social activity in most cultures, and numerous conventions and implicit rules govern this activity. A diet, however attractive it may be in theory, can cause problems in social relations of the dieter if this diet violates acceptable norms of behavior in a given social context. The natural intelligence theory is consistent with the idea that a person does not have to follow a strict diet on a permanent basis. This is because even a temporary improvement of mental abilities can have long-term benefits. For example, a person may decide to follow a strict diet for several days and can come up with a detailed plan for the upcoming several months. While good mental clarity is necessary for the planning phase, it is not as crucial for execution of the plan. Based on my experience, you can achieve sufficient improvements in mental abilities if you follow a strict diet for several days a week or several days per month. Most of the time, you can follow some conventional dietary regimen, such as Harvard’s Healthy Eating Plate [61] or the USDA’s MyPlate. A diet should not create more problems than it is supposed to solve. Diets are not the only way to improve mental abilities. Other techniques include food restriction methods and cold and hot hydrotherapy.
  8. Some types of raw food (for example, potatoes and mushrooms) contain toxic substances, which cooking destroys. Thus, “raw” does not always mean “better.”
  9. Fluoridation of water is not harmful [815], and the same applies to iodization of table salt. You can add small amounts of salt to food if necessary (see Appendix I for recipes). There is no evidence that table salt impairs mental abilities. You can also ignore the presence of dietary supplements in dairy (e.g., vitamin D [930] or calcium) if your consumption of dairy is in the moderate range (for example, one to two glasses of milk a day).
  10. Food that is free of artificial ingredients and is raw or cooked at moderate temperatures is not the most delicious food. (For somebody who is accustomed to this sort of nutrition, the taste can be pleasant.) With some resourcefulness and the right recipes (Appendix I), you can make this “smart food” delicious. In other words, food can have both a pleasant taste and a beneficial effect on mental abilities—it does not have to be either–or. Nevertheless, despite their negative effects on mental abilities, food additives and sophisticated cooking methods can make food pleasurable. Enjoyment of tasty food is often a “social ritual” and is one of the important sources of pleasure in life. Therefore, you shouldn’t use strict diets for extended periods of time. The key is keeping appetite under control when consuming tasty food and keeping the proportion of junk food in the diet under 2 to 5%. The last section of this chapter describes some techniques for controlling appetite and restricting consumption of food.
  11. Genetically modified food and food that is not “organic” may appear to be “unnatural.” In actuality, my experience suggests that there is no difference in taste and in subjective effects on mental state between genetically modified and genetically unmodified food. During digestion, the modified genes, carbohydrates, and proteins in genetically modified food break down into simple components such as nucleotides, nucleosides, amino acids, simple sugars, and so on. Digestion of genetically unmodified food produces virtually the same mixture of simple ingredients as that produced during digestion of genetically modified food. Therefore, there is no reason to be afraid of genetically modified food. There is no detectable difference between organic and nonorganic food either [842]. Agricultural chemicals are present in the final product in extremely low amounts, which are undetectable by taste. In contrast, most artificial ingredients such as salt, sugar, and preservatives are present in food in amounts noticeable by taste. Thus, what you add to food and the way you cook it have more influence on chemical composition of food than the “organic” or “genetically unmodified” status of food.
  12. Dietary fiber supplements are a special category of nutritional supplements that are indigestible by humans. Therefore, people can use these supplements without any fear that they may adversely affect mental state. These include microcrystalline cellulose, psyllium husks, and other indigestible types of dietary fiber (be careful with digestible types of fiber, such as apple pectin). In your alimentary tract, the action of indigestible fiber is purely mechanical (propulsion and lubrication). Fiber supplements are useful in the context of high-protein diets, as we will see in subsequent chapters. You could say that fiber supplements like psyllium husks and water extract of raw wheat are “natural” rather than artificial components of the diet, because people prepare them by means of a simple mechanical process.
  13. The natural intelligence theory ignores substances applied to the body externally, such as balms, cosmetics, antiperspirants, and the like, as well as household chemicals, such as air fresheners. If you use these products as instructed by the manufacturer, they will not affect either mental state or mental abilities.

The natural intelligence theory has several implications. First, complete exclusion of all artificial ingredients from the diet (with some of the above exceptions) is safe and feasible and will improve intelligence, according to this theory. Second, food that is cooked at moderate temperatures (e.g., by boiling or steaming) will be more beneficial for mental abilities than food cooked at high temperatures (for example, by frying, baking, or grilling). This is because the high temperature treatments cause greater changes in the chemical composition of food, making it less “natural,” as it were.

If the natural intelligence theory has any validity, there should be evidence that cooking of food and the presence of artificial ingredients in the diet can worsen mental abilities. The next two sections review this evidence.

Key points:

Artificial ingredients in the diet and their effects on mental performance

Let me clarify the meaning of the term “artificial ingredients.” The word “artificial” means in this context that humans used a sophisticated chemical process to create these ingredients. For example, sugar beets may be a product of human breeding, but they do not undergo sophisticated chemical processing; therefore, sugar beets are not an artificial ingredient. Refined sugar, on the other hand, is an artificial ingredient. Many of these artificial ingredients are pure chemicals and can be called “food additives.” This book uses the terms “artificial ingredients” and “food additives” interchangeably. Dietary supplements such as artificial vitamins and minerals are also in the category of artificial ingredients. Vitamins and minerals are often called “micronutrients” in order to distinguish them from macronutrients (carbohydrates, fat, and protein).

There is a long list of food additives (several hundred) that the health authorities have approved as safe to add to food in small amounts. This includes categories of chemicals such as acidity regulators, anticaking agents, antifoaming agents, antioxidants, color retention agents, emulsifiers, firming agents, flavor enhancers, flour treatment agents, food acids, food coloring, gelling agents, glazing agents, humectants, improving agents, mineral salts, preservatives, propellants, seasonings, sequestrants, stabilizers, sweeteners, thickeners, and vegetable gums. The additives that are most familiar are refined sugar (sucrose) and table salt (sodium chloride). You can find an up-to-date list of the artificial ingredients in the Wikipedia article entitled “List of food additives.”

Food additives do not have adverse effects on physical health at doses present in food products that you find in a grocery store. For this reason, regulatory agencies, such as the United States Food and Drug Administration, have approved these chemicals as safe for use in food products. Nonetheless, many food additives can have subtle effects on the functioning of the central nervous systemG in laboratory animals and in humans, as discussed below. (Readers can skip the detailed discussion of this topic and jump to the key points: press the skip button or this link.)

Before discussing the effects of food additives on mental abilities, I should say that readers need to view the results of the studies below with caution, for two reasons. The first reason is that the dose of a given food additive in animal studies is not always equivalent to the dose that an average human receives with food. The second reason is that the physiology of laboratory animals, most often rodents, is not identical to human physiology. Therefore, findings of adverse effects of a given chemical in rats are not always applicable to humans, although they usually are. It is not my purpose here to scare the reader into adhering to some diet by fear-mongering and exaggerating the undesirable effects of food additives. Rather, this text is trying to make the point that a temporary elimination of food additives from the diet should improve mental abilities. If you choose not to follow my advice and continue to consume food additives as usual, you are not going to become stupid or mentally ill. The research into the effects of food additives on mental state remains controversial, and you need to be careful when interpreting the results of such studies.

One of the pioneers of research into the effects of food additives on human behavior was Dr. Benjamin Feingold, whose main area of expertise was allergies in children. In his clinical practice in the 1960s, he noticed that elimination of some food additives from the diet of children could not only reduce allergic reactions but also improve the behavior of some children. His initial studies were promising, and he advanced the “Feingold hypothesis.” This hypothesis suggests that some food additives, such as food coloring agents, artificial flavors, some preservatives (BHA, BHT, TBHQ), and aspartame, can cause hyperactivity in children. This hyperactivity is known today as attention deficit hyperactivity disorder (ADHD). Subsequent studies by others, who tried to reproduce the findings of Dr. Feingold’s research team, failed to support the hypothesis. (The Feingold diet did show some improvements in other components of child behavior.) The Feingold diet showed benefits in open experiments, where the participants or those recording the results knew whether a patient received the active treatment or the control treatment. This diet, however, provided little or no benefit to children with ADHD in blinded clinical trials, where the participants or those recording the results did not know whether a patient received the active or the placebo treatment. Put another way, we can attribute most of the initially reported benefits of the Feingold diet to the placebo effect and observer bias. Studies of children on the Feingold diet, where scientists added one of the food additives back to the diet, failed to show negative behavioral changes in response to the suspected chemicals. Nevertheless, more recent studies, where researchers added a cocktail of several food additives to the diet, did show that food additives can contribute to hyperactive behavior in children [62, 63]. These were rigorous studies known as randomized controlled trials, and many of them were double-blind randomized controlled trials, the “gold standard” of clinical research.

For example, one study shows that a cocktail of several food additives, when ingested at realistic doses, contributes to inattentive/hyperactive behavior in children [63]. The researchers tested two different cocktails: one contained approximately 70 mg daily of a mixture of sunset yellow, carmoisine, tartrazine, ponceau 4R, and sodium benzoate; the second contained approximately 110 mg daily of a mixture of sunset yellow, carmoisine, quinoline yellow, allura red AC, and sodium benzoate. The doses were smaller for 3-year-olds. This was a double-blind randomized placebo-controlled study that included approximately 270 children in total. The authors concluded that a mixture of food additives commonly present in food can increase symptoms of inattention/hyperactivity in children. The finding was statistically significant, but the magnitude of the negative effect of food additives was modest. The observed modest effect cannot account for the large difference in symptoms between healthy children and children with attention deficit hyperactivity disorder [63]. An earlier study by the same group of researchers reached similar conclusions [62]. In 2004, a review of studies of the effects of food additives on hyperactivity in children identified 8 high-quality, well-designed clinical trials. That review reached the same conclusions: a) food additives have a noticeable, statistically significant effect in that they increase hyperactivity/inattention in children; b) the magnitude of the effect is small and cannot explain the symptoms of children diagnosed with ADHD [64].

Recent studies of “elimination diets,” or the “few-foods diet,” show that these diets can work as a treatment for attention deficit hyperactivity disorder [65–67]. These diets exclude all food additives and consist of a small number of food products that rarely cause food allergies. (The Feingold diet eliminates some but not all food additives and imposes fewer restrictions on food products.) One study investigated the effects of an elimination diet consisting of rice, turkey, lamb, vegetables, fruits, margarine, vegetable oil, tea, pear juice, and water [67]. Seventy percent of children with ADHD showed significant improvements on this diet, a level of effectiveness comparable to that of ADHD drugs. The control group did not follow any particular diet and thus served as a no-treatment control.

This and earlier similar clinical trials were “open” studies, where the test subjects and the people assessing the symptoms (parents and teachers) knew which children were in the control group and which were in the experimental group. Therefore, there is a chance that the results of this study contain a bias. For example, if some parents or teachers opposed the widespread pharmacological treatment of ADHD, they could inadvertently make the effects of the elimination diet look better than they were. As we saw in an earlier section of this chapter, it is possible to eliminate this bias. One option is to design a control diet and to conduct a blinded clinical trial where participants do not know which diet is supposed to have therapeutic effects. Another option is comparison of the elimination diet with a drug known to be better than a placebo in ADHD patients. The open trials of diets that Benjamin Feingold conducted four decades earlier showed promising results, but later blinded clinical trials by other investigators failed to confirm those results. Nonetheless, in principle, the results from clinical trials of elimination diets are in agreement with the recent rigorous studies of mixtures of food additives that we reviewed in the previous paragraph.

There are some studies that investigated possible benefits of elimination diets in autism, a brain disorder in the category of pervasive developmental disorders. The main symptom of autism is impaired social interaction, among other problems. A review of these reports concluded that the quality of these clinical trials was inadequate and further research is needed to obtain conclusive results [68].

Still other studies have investigated the effects of individual food additives on healthy humans and laboratory animals. Many of these chemicals are different from the additives that were the focus of Benjamin Feingold’s research. These other studies show that some food additives can impair learning, memory, alertness, or activity level. The list of artificial ingredients that may have undesirable effects on mental state includes erythrosine, iron, leucine, magnesium sulfate, monosodium glutamate, nitrates, nitrites, propionic acid, propylene glycol, refined sugar (sucrose), and stannous chloride. Some of the studies below used amounts of food additives comparable to the amounts that modern humans ingest with food. In this case, the evidence of negative effects on the mental abilities of animals suggests that those additives can have undesirable effects on humans. Most of the studies discussed below, however, used amounts of a given food additive that far exceed the dose that a modern human being normally ingests with food. These studies fail to prove that food products containing those additives have undesirable effects on mental health. Yet they do show that, in principle, pure chemicals (artificial ingredients) can cause detrimental consequences when ingested in large amounts. In other words, this research shows that “unnatural” or unusual types of food, such as large amounts of a pure chemical, can have negative effects on the functioning of the brain. (Note that many species of plants and mushrooms are poisonous to humans and thus also represent “unnatural” food.)

One study investigated the effects of erythrosine, a cherry-pink synthetic food coloring, on the behavior of rats [69]. Erythrosine affects metabolism of brain chemicals called neurotransmitters, which regulate attention, motivation, and activity level among other things. The study showed that erythrosine had no detectable effect on behavior of the rats at the (low) doses comparable to human intake of this dye with food. High doses of erythrosine did produce changes in the behavior of the rats. These changes resembled the effects of drugs that exacerbate symptoms of attention deficit hyperactivity disorder. The authors concluded that erythrosine may affect metabolism of several neurotransmitters in the brain. Another conclusion was that relevant (low) doses of erythrosine have no detectable effect on the activity level or other components of behavior relevant to ADHD.

Other studies investigated the effects of iron, which food companies often add to breakfast cereals or bread in order to “fortify” them. To be precise, iron is not a food additive but a nutritional supplement. In one experiment, scientists injected adult rats daily with iron at doses of 2.5 mg/kg and higher and then subjected the rats to a battery of tests assessing learning and memory [70]. All doses impaired learning starting on day 3 of the experiment. The lowest dose in this study exceeded the recommended daily allowance of iron by about 10-fold. It is unclear whether the results of the experiment are applicable to food products fortified with small amounts of iron. Note that a finding of an adverse effect at a large dose does not necessarily mean that a lower dose will have a similar effect of smaller magnitude. In biological systems, a smaller dose can have no effect at all, or it may have a different, even beneficial, effect [71, 72]. Two other studies on rats investigated the effects of large doses of iron, about 40 times the recommended daily allowance for humans. The results showed that iron can impair some components of memory function, such as novel object recognition [73, 74].
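To see where the “about 10-fold” figure comes from, the short Python calculation below scales the lowest injected dose to a hypothetical 70-kilogram adult and compares the result with a recommended daily allowance of roughly 18 mg of iron. This is my own back-of-the-envelope sketch, not part of the cited study, and both the body weight and the allowance figure are illustrative assumptions.

# Rough illustration of the "about 10-fold" comparison (assumptions are illustrative only).
lowest_dose_mg_per_kg = 2.5          # lowest daily injected dose in the rat study [70]
assumed_body_weight_kg = 70.0        # assumed adult body weight
assumed_rda_mg_per_day = 18.0        # assumed recommended daily allowance of iron

scaled_dose_mg_per_day = lowest_dose_mg_per_kg * assumed_body_weight_kg  # 175 mg per day
fold_over_rda = scaled_dose_mg_per_day / assumed_rda_mg_per_day          # roughly 10

print(f"{scaled_dose_mg_per_day:.0f} mg/day is about {fold_over_rda:.0f} times the assumed RDA")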

Leucine is an amino acid used by the food industry as a flavor enhancer. Bodybuilders use it as a food supplement. One study showed that injection of large amounts of leucine can impair learning in young rats [75]. This negative effect is unlikely to be relevant to the amounts of leucine that can be present in food. Yet this effect may be relevant to some bodybuilders who gorge on leucine.

Magnesium sulfate is an acidity regulator and firming agent; the food industry also uses it as a coagulant to make tofu. Large amounts of magnesium sulfate can act as a laxative and cause diarrhea, and can also temporarily impair attention and working memory in humans [76]. A study on rats showed that large doses can impair learning [77]. These findings are unlikely to be relevant to the small amounts of this substance present in human food. As mentioned above, a finding of an adverse effect at a large dose does not necessarily mean that a lower dose will have a similar negative effect of smaller magnitude; the lower dose may have no effect or a different kind of effect. Readers who are interested in unusual effects of low doses can look up the term “hormesis” [71, 72].

Monosodium glutamate (MSG) is a flavor enhancer used by the food industry. Research shows that large doses of MSG (2,000 to 4,000 mg/kg) reduce activity level in mice [78] and can impair learning in young rats [79, 80]. These findings may not be relevant to the small amounts of MSG (well below 100 mg/kg) that humans can ingest with food.

The food industry often adds nitrates, such as sodium nitrate or potassium nitrate, to meat products as preservatives and color fixatives. Large doses of nitrates, which can serve as a treatment for angina pectoris, can cause headaches in susceptible people [81]. This observation does not fall into the category of effects on mental state, but it is worth mentioning. The negative effect may not be applicable to the small doses of nitrates present in food products. Sodium nitrite is a related food additive that the food industry uses as a preservative and color fixative. A large dose of sodium nitrite (55 mg/kg), which is unattainable with the amounts present in human food, can reduce the activity level of rats [82]. A smaller dose of this substance (11 mg/kg) can impair learning in rabbits [83]; the latter dose is achievable in humans with daily consumption of large amounts of processed meats. Propionic acid is another preservative, and large doses of this substance impair mental performance in rats [84]. This negative effect may not be applicable to the doses normally present in food products.

Propylene glycol is a humectant and a solvent for other food additives and drugs. One clinical case report described a patient with a history of recurrent, treatment-resistant epileptic seizures [85]. After analyzing the blood work and other laboratory data, the authors concluded that the seizures were the result of propylene glycol poisoning from the patient’s favorite fruit drink. After the patient stopped consuming the suspected fruit drink, the seizures did not recur. This single observation in no way constitutes rigorous proof that small amounts of propylene glycol can cause seizures. The patient most likely consumed excessive amounts of this substance, enough to change the concentration of total protein in the blood, and the authors stated that this shift in blood protein concentration was in all likelihood the main cause of the seizures. There is an additional possible explanation. One study shows that propylene glycol can enhance the release of a brain chemical called dopamine from neurons [86]. At high doses, dopamine-releasing drugs such as amphetamine can cause seizures [87]. Thus, increased release of dopamine in the brain of that patient could also be responsible for the seizures.

Stannous chloride is a color retention agent and antioxidant. Large doses of this chemical can either stimulate or depress the central nervous system in laboratory animals, depending on the context [88]. It is unknown if the small doses of stannous chloride present in human food can have any psychotropic effects on humans.

One of the most famous and controversial food additives is refined sugar (sucrose). Early studies suggested that consumption of sucrose can contribute to “inappropriate behavior” in children [89, 90]. Some other studies even suggested that elimination of refined sugar from the diet of prisoners (replacement with fruit) can improve their behavior [91]. On the other hand, later studies showed that ingestion of large amounts of sucrose has no effect on the behavior of children and convicts [89, 92, 93].

Nevertheless, the role of refined sugar in the current epidemic of obesity is widely accepted, and the scientific evidence is compelling. Experiments on laboratory animals show that sucrose can cause weight gain, increase the blood cholesterol level, and negatively affect learning and memory [94, 95]. For example, one study on rats shows that consumption of a sucrose solution can increase distractibility and impair the ability to become accustomed to repeated stimuli [96]. Another study shows that obesity-causing doses of sucrose can impair learning in rats [95]. One report shows that, in humans, a sucrose solution can cause sleepiness 30 to 60 minutes after ingestion without any other noticeable effects on mental abilities [97]. Another study on humans shows that a single dose of refined sugar has no effect on mood or attention but can improve some components of memory [98]; this beneficial effect disappears with repeated administration of sucrose. A recent review of high-quality studies concluded that refined sugar is unlikely to play a role in inappropriate or delinquent behavior of children [89]. Based on the available literature, we can conclude that the negative effects of sucrose mostly concern physical health, whereas the effects on mental abilities are small.

Table salt (sodium chloride) is another familiar food additive. To the best of my knowledge, there is no evidence that it has any detrimental effects on mental performance. Many of the diets described in this book allow adding small amounts of salt for taste. Nonetheless, salt has adverse effects on the cardiovascular system, and studies suggest that cutting salt consumption by half can reduce the incidence of cardiovascular disease in the population.

Dietary supplements that consist of micronutrients such as vitamins and minerals are a special category of artificial ingredients, distinct from food additives. Food additives are something the food industry adds to food, and they carry negative connotations in the minds of consumers: “MSG causes you to overeat,” or “sugar-sweetened soft drinks contribute to obesity.” In contrast, vendors sell dietary supplements separately from food and advertise them as something that can improve your health and well-being. In the United States, there is a lot of misleading advertising associated with dietary supplements. They are not prescription drugs, and the Food and Drug Administration does not regulate them as strictly. Regardless of the advertised health benefits, all dietary supplements must carry a disclaimer on the package stating that “This product is not intended to diagnose, treat, cure or prevent any disease.” On the QuackWatch website, readers can find an excellent overview of the deceptive tactics often employed by vendors of dietary supplements.

Vitamins and minerals (micronutrients) are naturally present in sufficient amounts in many healthy types of food such as whole grains, fruits, vegetables, meat, fish, eggs, and dairy. In theory, vitamin and mineral supplements should be beneficial for people who subsist on a deficient, unbalanced diet. Health benefits of micronutrient supplementation are less clear for people who consume a balanced healthy diet. Long-term studies of the effects of multivitamin supplements in the U.S. failed to show benefits for health [99–103]. These data suggest that people who consume a balanced diet do not need to take vitamin supplements. At least, these people cannot expect multivitamin pills to prevent cancer and cardiovascular disease [99]. Some studies even show that vitamin supplementation may have negative effects on health. Two studies show that β-carotene supplementation correlates with more frequent occurrence of lung cancer. Another study shows that folic acid supplementation is associated with an elevated risk of precancerous polyps [99, 104]. Other studies have identified higher mortality rates among users of antioxidant supplements compared to control groups [105, 992, 993].

According to the natural intelligence theory, artificial vitamins and minerals are “unnatural” ingredients that will worsen mental abilities.G I do not consume micronutrient supplements, and when I tried them in the past they did not have noticeable effects on mental state or mental performance. Contrary to my theory, however, some studies have shown that micronutrient supplementation can have beneficial effects on mental abilities and behavior of some segments of the population. There is an excellent review of this topic by David Benton [106].

In defense of my theory, I can say that these studies show that micronutrient supplementation is most effective for people who consume a deficient, unbalanced diet. The benefits are less obvious for people who consume a balanced diet [106, 107, 938]. Even in the context of a balanced diet, thermal treatment of food (i.e., cooking) results in partial degradation of some vitamins (by 10 to 30%). Therefore, adding artificial vitamin supplements to a cooked diet may produce benefits. If, however, a person is on a balanced raw diet that is free of pathogens, then supplementation with vitamins will not provide any further benefit and may worsen mental abilities. This is because, in the context of the ancestral diet, all the necessary vitamins and minerals will be present, and supplementation with artificial micronutrients will be unnecessary and “unnatural.” To date there is no experimental evidence to either confirm or refute these suppositions, and further studies are needed. Regarding more realistic lifestyles, not all studies of micronutrients show that they improve brain function. For example, a recent randomized controlled trial shows that micronutrient supplementation (vitamins and minerals) does not improve the mental development and mental abilities of infants in Zambia [983]. Thus, we can conclude that micronutrient supplementation produces small or even undetectable improvements in mental abilities.

Most of the above studies examined the effects of individual artificial ingredients or, in some cases, a mixture of a small number of artificial ingredients. A great number of different artificial ingredients are present simultaneously in a typical modern diet. Therefore, these ingredients may also produce novel, unexpected effects on health due to interactions among the effects of individual food additives. In other words, just as prescription drugs can have adverse interactions when taken simultaneously, artificial chemicals in the diet may also interact adversely. Studies of individual artificial ingredients cannot identify these interaction effects. A good tool for studying the collective effects of all artificial ingredients is an elimination diet, or a so-called few foods diet [67]. Elimination diets consist of several essential types of food and exclude all food additives and dietary supplements. Therefore, an elimination diet removes both the individual effects of each food additive and any interaction effects, which may produce detectable changes in mental state or mental abilities. Another useful research tool is the use of cocktails consisting of many food additives [63].

To sum up, artificial ingredients in the diet, such as food additives and dietary supplements, can have negative effects on health and mental abilities. Yet when scientists test a single food additive, the negative effects are small and often undetectable. Collective negative effects (interaction) of all artificial ingredients in the diet may be more significant. Further studies of elimination diets will shed light on this issue.

Key points:

Chemicals formed during the cooking of food and their effects on the brain

That cooking degrades vitamins to some extent is a well-known fact, and we will not discuss this topic at length. For many vitamins, degradation with cooking does not exceed 10 to 30% [108, 109]. Although this may have negative effects on health in theory (by causing a slight vitamin deficiency), in real life, people who consume a balanced, mostly cooked diet do not have vitamin deficiencies. As mentioned above, vitamin supplements do not provide long-term health benefits to people who consume a balanced diet [99], at least in industrialized countries. Some short-term studies show that micronutrient supplementation (vitamins and minerals) can slightly improve mental abilities in people who consume a balanced diet [106, 107]. One can interpret these findings as showing that cooking reduces the nutritional value of food and worsens mental abilities to some extent, and that vitamin supplementation can correct this. Put another way, vitamin supplementation of a balanced (cooked) diet can restore the amount of vitamins to the level characteristic of an equivalent raw diet, which is supposed to restore mental performance to its “natural level.” Note that the reported beneficial effects of vitamin supplementation are tiny and often nonexistent. It is more interesting to explore the other side of the coin: cooking creates novel chemicals that are absent in raw food.

I did some literature research on chemical changes that occur in different types of food during thermal cooking. With respect to the effects on mental state, I was able to find data on some problematic chemicals. These chemicals form in muscle meats (such as beef or fish) and in cereal grains (such as wheat and barley) during cooking. Despite my best efforts, I was unable to find any such problematic chemicals that are formed in pasteurized milk or in fruits and vegetables cooked at moderate temperatures. One can interpret these data in two ways.

  1. Pasteurization of dairy products and cooking of fruits and vegetables do not produce chemicals that have negative effects on mental abilities. Judging by the effects on mental abilities and mental state, it makes no difference whether you consume these products raw or pasteurized/cooked. (Pasteurized milk is safer than raw milk.)
  2. Alternatively, the existing knowledge base (scientific literature) concerning chemical differences between raw and cooked foods is limited at present. Therefore, intelligence-worsening chemicals may be present in pasteurized dairy and in cooked fruits and vegetables. We do not yet know about the existence of these chemicals.

My personal experience suggests that the first interpretation is likely to be correct. Namely, pasteurization of dairy and cooking of fruits and vegetables at moderate temperatures have no effect on mental state or mental abilities.

On the other hand, my self-experimentation with raw meat and raw grains suggests that cooking these products causes noticeable changes in mental abilities, compared to a 100% raw diet. In other words, if we replace cooked meat and cooked grains with the corresponding raw products, both the subjective mental state and mental abilities should improve. The difference is detectable (subjectively) even after a single meal. You should not experiment with raw meat and fish because they pose serious risks to health (see Table 1). It is possible to improve mental abilities without resorting to such dangerous measures (and to achieve similar or even better results). Please don’t interpret the discussion that follows as encouragement to consume raw animal products. This type of food should become safe for the general population in the near future when the government approves new food sterilization technologies.B

With respect to cooked animal products, some studies have shown that, in humans, high-protein diets can lower mood [110, 111] and cause bad breath [112], fatigue [111, 113], and emotional tension [110, 114]. Some high-protein diets, when combined with a regimen of physical exercise, can worsen mood [110, 111]. There are studies suggesting that vegetarians tend to have better subjective ratings of mood than people consuming a mixed diet [115, 116]. Even a single high-protein meal consisting of cooked chicken and eggs can lower mood [117]. There is a report of a healthy person with a history of anxiety who had a relapse of panic attacks and anxiety symptoms after switching to a high-protein, low-carbohydrate diet [118]. Most people tolerate high-protein, low-carbohydrate diets well for extended periods of time [119], but these diets may still cause halitosis and fatigue [111–113]. Consumption of foods that contain protein of the highest quality, such as red meat, correlates with an elevated risk of cancer and cardiovascular disease [120–123], according to epidemiological studies. The process of cooking forms some chemicals that are either absent or present at very low levels in raw animal products. These chemicals include heterocyclic aromatic amines, polycyclic aromatic hydrocarbons, nitropyrenes, cholesterol oxidation products, and creatinine [124–137]. It is possible that these chemicals are the cause of some of the above-mentioned adverse effects on health. (Readers can skip the detailed discussion of this topic and jump to the key points: press the skip button or this link.) On the other hand, a recent study shows that a high-protein meat diet (3 weeks) does not cause deterioration of mental abilities and can even improve reaction time [933].

As mentioned above, high-protein diets (which often include substantial amounts of red meat) can have adverse effects on health. These negative effects of animal products may in part be responsible for the popularity of vegetarian diets [115, 116]. If animal products (such as red meat) can have adverse effects on health, then the question arises whether humans by nature are vegetarians. Is human physiology incompatible with consumption of muscle meats? The answer is “no” because studies show that some species of primates in the wild are vegetarians (frugivores) but others are omnivores. The omnivorous primates often kill invertebrate and vertebrate species of animals for food [39–41, 44]. The closest genetic relatives of humans, chimpanzees [138], consume meat, including red meat [39–41]. The gastrointestinal tract of humans has a structure intermediate between that of carnivorous and vegetarian mammals. These data suggest that humans have adapted to consuming both plant and animal food [44, 45]. The notable difference between humans and primates living in the wild is that the former consume animal products that are predominantly cooked whereas the latter consume animal products that are raw [45].

Cooking with fire reduces the risk of infectious disease because it kills most bacteria and viruses [139]. It also introduces significant chemical modifications into food by denaturing proteins [140, 141], causing degradation of vitamins and lipids [142–145], and creating novel chemical compounds [124–137, 146, 147]. Studies show that cooking fish and meat at both high and moderate temperatures leads to the formation of a number of chemicals with mutagenic and carcinogenic properties [148–150]. A number of studies have reported the presence of a mutagenic activity in cooked meat and fish [127, 129, 131–137, 146, 151]. This mutagenic activity is undetectable in raw meat and fish [136, 146, 152, 153]. The mutagenic activity exists in cooked muscle meats only and is undetectable in cooked organ meats and in cooked plant and dairy products [152, 154]. The mutagenic chemicals are absent or below detection level in raw animal products but are present in cooked meat and fish. These compounds include heterocyclic aromatic amines [128, 129, 133, 134, 137, 146, 151, 152, 155–159], polycyclic aromatic hydrocarbons [125, 131, 160–162], and nitropyrenes [132, 135]. (Small amounts of polycyclic aromatic hydrocarbons may be present in raw products as a result of environmental pollution [163, 164].) Polycyclic aromatic hydrocarbons form in animal products during cooking on an open flame (e.g., barbequing). The amount of mutagenic compounds increases with the temperature and duration of cooking [136, 146, 165]. Animal products cooked at moderate temperatures (for example, by boiling or steaming) do contain mutagenic compounds [133, 136, 148, 157, 166, 167]. Yet the concentration of mutagens is much lower, sometimes undetectable, compared to high-temperature cooking procedures such as grilling or frying [146, 152]. Some of the mutagens detectable in cooked meat and fish have carcinogenic properties in laboratory animals [128, 134, 156, 160, 168]. Among other chemical changes during cooking, the concentration of cholesterol oxidation products can increase 5- to 10-fold [124, 126, 169]. The amount of creatinine can increase over 30-fold in cooked meat compared to uncooked meat [130].

In addition to the possible mutagenic/carcinogenic effects, some of the aforementioned chemical compounds may have other negative effects, such as effects on mental state. First we will review direct causal effects and, later, some association studies that cannot prove a causal link. For example, polycyclic aromatic hydrocarbons, such as pyrene and benzo[a]pyrene, form in animal products cooked at high temperatures (e.g., grilling and frying). These chemicals can induce behavioral depression in experimental animals at doses that exceed those found in human food [170–173]. This effect may be due to changes in the level or metabolism of neurotransmitters in various regions of the brain [174, 175]. Benzo[a]pyrene has also been shown to impair short-term memory and learning in rats, at doses exceeding those present in human food [176, 177].

Elevated blood levels of cholesterol oxidation products (oxysterols) play a role in the pathogenesis of atherosclerosis [178, 179]. The latter is a pathological process leading to the clogging of blood vessels that underlies many cardiovascular diseases. In test tube experiments, which do not reflect what happens after consumption of cooked animal products, oxysterols can induce death of some types of cells in the central nervous system [180]. This should not be a cause for alarm, since oxysterols are present at low levels in the circulation of healthy people and serve several biological functions.

Research shows that a heterocyclic amine called norharman [155] reduces activity level in laboratory mice and causes pathological changes in the brain characteristic of neurodegenerative diseases [181]. The dose of norharman in this experiment was higher than what humans can receive with food. Another chemical from the same class, harman [129], which can also be present in cooked animal products, is toxic to some types of nerve cells [182]. Again, this should not be a cause for alarm, since both harman and norharman are present at low levels in the circulation of healthy people and serve several biological functions [183].

Other studies are less conclusive. Some of them suggest that chemicals formed by cooking may have something to do with abnormal mental functioning, but there is no proof that these compounds cause such problems. For example, the blood level of creatinine increases after ingestion of cooked meat [130, 184] but remains unchanged after ingestion of uncooked meat [130]. Elevated blood levels of creatinine and poor clearance of creatinine by the kidneys correlate with fatigue [185–187] and depressive symptoms in various groups of patients [188–191]. It is unknown whether this relationship is coincidental or causal.

Elevated blood levels of creatinine and oxysterols correlate with cognitive impairment in various groups of patients [192–195]. Creatinine serves as an indicator of kidney dysfunction; consequently, elevated blood levels of this chemical may point to accumulation of waste in the blood due to insufficient kidney function. Thus, the reported cognitive impairment may be the result of kidney problems [196] rather than of the elevated level of creatinine. The heterocyclic aromatic amines harman and norharman [129] can be toxic to kidney cells [197, 198], and the former can also be toxic to nerve cells [182, 199, 200]. The blood level of harman correlates with symptoms of anxiety and depression in alcoholics [183]. It is not known whether this relationship is coincidental or causal.

To summarize, there is evidence that cooked meat can lower mood but no evidence that it can impair mental abilities. It is possible that the effects on mood are due to heterocyclic amines, but hard evidence is lacking. The above studies suggest that, from the standpoint of metabolism, humans should tolerate uncooked animal products better than cooked ones. The major obstacle is various pathogens that can be present in raw animal foods (Table 1). Endnote B describes possible technical approaches that can ensure the safety of such foods in the future.

The mere mention of raw animal products and food in the same sentence may seem unusual and unpleasant. Yet consumption of uncooked animal products is not as exotic as it appears at first glance [201, 202]. Some types of sausage (such as salami and teewurst) are uncooked, that is, they undergo no thermal treatment [201, 203]. There are also national and regional dishes that consist of raw animal foods, such as sushi (Japan), hollandse nieuwe (Netherlands), and carne all’albese (Piemonte, Italy). Consumption of raw ground beef is not uncommon in Belgium and some other European countries [202]. An even more vivid example is the traditional diet of indigenous ethnic groups of the Arctic region (for example, North American Eskimos). These peoples consume a large proportion of their fish and meat (caribou, seal, walrus, polar bear, and whale) raw, frozen or thawed [55–58]. Meat and fish can constitute close to 100% of the Eskimo diet during the winter [58]. Nevertheless, at least in the United States and Russia, government regulators have not approved any raw animal products as safe for human consumption. They have neither banned nor approved foods such as sushi [52], whereas other products such as raw dairy are either banned or face numerous restrictions on transportation and sales [204].

Going back to the effects of diets on metabolism, people should tolerate uncooked animal foods well if these foods are free of pathogens. This is because the raw diet is very old and pervasive in nature [45]. For example, animals in the wild do not cook their food [45]. Evolutionary predecessors as well as some early humans in all likelihood consumed a 100% raw food diet, prior to the mastery of cooking with fire approximately 300,000 years ago [42]. Some reports of successful control of fire, but not cooking, date as far back as 1.0 to 1.6 million years [205, 206, 1004]. Cooking with fire reduces the risk of infectious disease because it kills most pathogens [139], but it also produces chemical modifications in food, as we already discussed above. It is possible that during the last 300,000 years, the human immune system (at least in those living outside the Arctic region) has grown unaccustomed to the pathogens that occur in raw animal foods (Table 1). For this reason, these foods are not safe for human consumption. In addition, given humankind’s long history with cooking, Homo sapiens may have adapted to cooked food genetically through natural selection. Therefore, it is possible that humans need a certain percentage of cooked food in their diet for normal functioning.

Cooking has other benefits in addition to disinfection of food. For instance, cooked meat requires less energy to digest than uncooked meat [207]. Cooking can also reduce the concentration of organic pollutants in animal products [208–210]. Finally, the negative effects of cooked high-protein diets [110–114] do not manifest themselves in people on cooked diets with a normal amount of protein. These data suggest that the liver detoxification system can neutralize the small amounts of novel chemicals that form in animal products during cooking.

It is now time to discuss the chemicals formed by cooking of grains; these chemicals are absent in raw cereal grains. When certain amino acids (components of proteins) and some types of sugars (components of carbohydrates, such as starch) are mixed and heated, they undergo a chemical reaction called the Maillard reaction. The novel chemicals that form in this reaction are called Maillard reaction products. Cereal grains (e.g., wheat and rice) contain significant amounts of both protein and carbohydrates. Thus, cooking of grains leads to the formation of a number of different Maillard reaction products. The amount of Maillard reaction products increases with the temperature and duration of cooking. This means that boiled grains (moderate cooking temperature) contain smaller amounts of Maillard reaction products than bread (high cooking temperature) [211–215].

Some of the Maillard reaction products that have been identified in cooked cereal grains are acrylamide, carboxymethyllysine, carboxyethyllysine, and fructosyllysine [216–219]. Of these four, acrylamide is the only chemical with published evidence of direct negative effects on mental abilities [220]. For the other three, all studies are correlational: a direct causal link is possible but not proven [221–223].

A number of studies have shown that acrylamide, which is also a well-known industrial pollutant, is toxic to neurons and can have several adverse neuropsychiatric effects in humans and laboratory animals. This chemical received a lot of attention several years ago, when Swedish researchers announced that crispbread and French fries contain levels of acrylamide that exceed, by some 500-fold, the maximum level allowed in drinking water by the World Health Organization [224, 225]. In contrast to bread, which is cooked at high temperatures (300–400 degrees Celsius, or 570–750°F), boiled grains contain undetectable levels of acrylamide [215, 225]. Boiled grains may, however, contain Maillard reaction products other than acrylamide [212–214]. Table 2 below summarizes the acrylamide content of various types of cereal grains. It is worth mentioning that the hypothetical carcinogenic effects of acrylamide in humans remain unproven [220].

Table 2. Acrylamide content of various types of grains [211, 215, 225, 235]. “Undetectable” means below detection level of available analytical methods. N/A means data are not available. The neurotoxic dose of acrylamide is 200–500 micrograms per kilogram of body weight per day [220, 227].

Soaked raw grains or water extract of raw whole-grain flour

Acrylamide content: undetectable (< 5 mcg/kg)
Can this food deliver a neurotoxic dose? No
Effects* on mental abilities: increased alertness

Boiled whole grains

Acrylamide content: undetectable (< 5 mcg/kg)
Can this food deliver a neurotoxic dose? No
Effects* on mental abilities: slowing; reduced impulsivity

Breakfast cereals

Acrylamide content: ~60 mcg/kg
Can this food deliver a neurotoxic dose? No
Effects* on mental abilities: N/A

Whole-grain bread

Acrylamide content: ~50 mcg/kg
Can this food deliver a neurotoxic dose? No
Effects* on mental abilities: increased amount of errors, slowing, reduced impulsivity**

Toasted bread

Acrylamide content: ~200 mcg/kg
Can this food deliver a neurotoxic dose? No
Effects* on mental abilities: N/A

Crispbread***

Acrylamide content: 1000–2000 mcg/kg
Can this food deliver a neurotoxic dose? No
Effects* on mental abilities: N/A

* based on my personal observations; rigorous scientific proof is not available.
**after one week of a bread-and-water diet, I have not observed any negative effects on physical health, although one study on rats showed that addition of 5% to 25% of bread crust to the diet can cause weight gain and kidney damage [236].
***potato chips and French fries also contain large amounts of acrylamide; raw and boiled potatoes do not contain detectable levels of acrylamide [215, 225].

In people who work in chemical manufacturing, symptoms of acrylamide poisoning can manifest themselves within several months to several years. The typical symptoms include: numbness or tingling in the hands and feet, increased sweating of hands and feet, fatigue, muscle weakness, clumsiness of the hands, unsteady gait, dizziness, stumbling, and falling [226, 227]. This corresponds to a dose in the range 500 to 2,000 micrograms per kilogram of body weight per day [226–228]. The effects of huge doses of acrylamide (10,000 mcg/kg b.w./day) administered over several days are loss of motor coordination, tremor, drowsiness, and mental confusion [229, 230]. The “no observable adverse effects level” (NOAEL) for neurotoxic effects of acrylamide is around 200 to 500 micrograms/kg b.w./day [220, 227]. High doses of acrylamide, 20,000 to 30,000 micrograms/kg b.w./d, impair cognitive functions in pigeons [231]. These high doses can also reduce activity level and cause a state of lethargy in rats [232, 233]. Long-term exposure to unsafe levels of acrylamide can damage peripheral nerves and can cause fertility problems in males [227, 234].

The key, of course, is the dose, and most of the studies above involve doses that far exceed the amounts of acrylamide that can be present in food (Table 2). Some types of food, such as crispbread, potato chips, and French fries, contain high levels of acrylamide, yet even these foods cannot deliver a dose above the NOAEL for its neurotoxic effects. We can conclude that the presence of acrylamide in some foods should not be a cause for concern. It is unknown whether the low doses of acrylamide present in food can affect mental abilities such as fluid intelligence. The neurophysiological effects of Maillard reaction products other than acrylamide, which are present in cooked grains, are unknown. Therefore, at present, there is no evidence that cooked grains can worsen mental abilities.
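To illustrate the arithmetic behind this conclusion, the short Python sketch below estimates how much of several foods from Table 2 a person would have to eat every day to reach even the lower bound of the NOAEL. This is my own illustration, not a calculation from the cited studies; it assumes a 70-kilogram adult and uses the approximate acrylamide concentrations listed in Table 2.

# Rough check of the claim that the foods in Table 2 cannot deliver a dose above the NOAEL.
# Assumptions (illustrative only): a 70-kg adult; NOAEL taken at its lower bound.
noael_mcg_per_kg_bw_per_day = 200.0    # lower bound of the NOAEL range [220, 227]
assumed_body_weight_kg = 70.0          # assumed adult body weight

# Approximate acrylamide content from Table 2, in micrograms per kilogram of food.
acrylamide_mcg_per_kg_food = {
    "whole-grain bread": 50.0,
    "toasted bread": 200.0,
    "crispbread": 2000.0,              # upper end of the 1000-2000 mcg/kg range
}

daily_noael_dose_mcg = noael_mcg_per_kg_bw_per_day * assumed_body_weight_kg  # 14,000 mcg per day

for food, content in acrylamide_mcg_per_kg_food.items():
    kg_of_food_needed = daily_noael_dose_mcg / content  # kg of this food per day to reach the NOAEL
    print(f"{food}: about {kg_of_food_needed:.0f} kg per day to reach the NOAEL")

Even crispbread, the food with the highest acrylamide content in the table, would have to be eaten in quantities on the order of seven kilograms per day to reach the lower bound of the NOAEL, which is consistent with the “No” entries in Table 2.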

My personal experience, which does not count as rigorous scientific evidence, suggests that cooked grains can cause a noticeable slowing and have a mild sedative effect (Table 2). This is not the case with raw grains. Cooked grains can cause a small weight gain, whereas raw grains do not. Whether this is due to the presence of Maillard reaction products in cooked grains or to something else is unknown. Regular whole-grain bread contains a safe concentration of acrylamide, according to the occupational safety standards (Table 2). Nonetheless, I avoid bread and any type of baked or fried food (but I never say “never” when it comes to any food). My self-experimentation suggests that bread is the most typical “dumb food.” It quickly impairs mental abilities when I eat it in large amounts daily; we will discuss this in detail in a later section of this chapter. Boiled grains have a mild sedative effect, but they do not impair mental abilities.

There are some other differences between raw grains and cooked grains. For example, raw wheat bran can accelerate the passage of food through the digestive tract, while cooked wheat bran does not have this effect [237]. One study showed that bread crust can cause weight gain and kidney damage in rats; these negative effects were absent in control animals that ate equivalent amounts of raw wheat flour [236]. Some studies have shown that the protein in cooked grains is more difficult to digest than the protein in raw grains [238, 239]. The Maillard reaction damages proteins, which may reduce their nutritional value [240]. Heating can also curdle some soluble proteins (albumins) present in cereal grains, making them insoluble and less accessible to digestion. On the other hand, some studies show that cooking has no effect on the digestibility of nutrients from grains, or may even improve it [241, 242]. Based on my experience, raw grains are not an appetizing food. Some of them (wheat, oats) can be soaked in water, which makes them edible, but they are still not tasty and can cause a lot of gas. I found that a water extract of some grains has a pleasant taste and is more palatable than soaked raw grains; we will talk about this in later sections. Some grains, such as sorghum, may not be safe to consume raw because they contain toxic substances, so be careful with raw grains not mentioned above. Raw grains are safer than raw animal products, but in rare cases they may contain bacteria, yeast, or yeast toxins [242–244], and you need to verify the safety of the product in question with the manufacturer. The same is true of any raw food, including raw fruits and vegetables, which may contain pathogenic bacteria; therefore, you should always wash fruits and vegetables before consumption.

In conclusion, this section shows that some types of cooking of cereal grains and animal products produce chemicals that have adverse effects on health and mental abilities. However, the amounts of these undesirable chemicals are negligible in foods cooked at moderate temperatures (for example, boiled or steamed). Therefore, these foods are safe to consume, and you should not worry about possible negative health effects. As mentioned earlier, biological systems such as the human body are complex and exhibit complex responses to different doses of the same chemical. The response to low doses of a chemical known to be toxic at high doses is difficult to predict. Paradoxically, research shows that low doses of toxic chemicals can often have beneficial effects on laboratory animals, a phenomenon known as hormesis (not the same as homeopathy) [71, 72]. Therefore, you should not be scared of the words “mutagen,” “carcinogen,” and “neurotoxic chemical” that came up in the discussion above. Animal products and grains cooked at moderate temperatures contain tiny amounts of these compounds, and at this dose level the chemicals may even be beneficial for health, according to hormesis research.

On the other hand, it is best to avoid meat and grains cooked at high temperatures (frying, grilling, barbequing, microwaving, baking, and broiling). This is because the former contains significant amounts of carcinogens while the latter contain significant amounts of acrylamide and other Maillard reaction products. The small (safe) amounts of these chemicals in foods cooked at moderate temperatures may be responsible for the sedative effect of boiled grains and for the mood-lowering effect of cooked meat. These effects should not be a cause for alarm and are often useful, as you will see in Chapters Four and Five.

Key points:
