How Many Times does "Mitochondrial Eve" have to Die?
Bert Thompson, Ph.D.
Brad Harrub, Ph.D.
Just when evolutionists thought it couldn’t possibly get any worse—it has! Toward the end of 2002, we authored an article for the Apologetics Press Web site titled “The Demise of ‘Mitochondrial Eve.’ ” In that article, we noted how rapidly things can, and often do, change in science. As an example of that fact, we discussed the well-known evolutionary icon, “Mitochondrial Eve,” a woman who was alleged to have lived in Africa at the beginning of the Upper Pleistocene period (between 100,000 and 200,000 years ago). Eve had been described as the most-recent common ancestor of all humans on Earth today. In fact, in mid-2002, some evolutionists still were touting her as exactly that—in spite of overwhelming scientific evidence to the contrary. Geneticist Spencer Wells, in his book, The Journey of Man: A Genetic Odyssey, referred to Eve as “a real person who lived at that time—the common ancestor of everyone alive today” (2002, p. 54). Wells went on to inform his readers: “Crucially, though, the fact that a single ancestor gave rise to all of the diversity present today does not mean that this was the only person alive at the time—only that the descendant lineages of the other people alive at the same time died out” (p. 32).
This makes a great “just-so” story. But is any of it true? As we pointed out in our article on “The Demise of ‘Mitochondrial Eve,’ ” no, it’s not. The scientists who performed the original work that led to the creation of Eve (see Cann, et al., 1987) used estimates of the frequency of mutations that occur in the DNA within a cell’s mitochondria, in an attempt to determine how far back in time our alleged “most-recent common ancestor” could be traced (an explanation of how this works follows below). In performing this work, the researchers assumed that all of the DNA in the cell’s mitochondria had been passed down generation by generation only by the female. Other evolutionists who performed similar studies continued to make that same assumption—until reports began appearing in 1999, documenting that mitochondrial DNA also can be (and often is) passed down generation to generation by the male. This information destroyed the basic assumption upon which “Mitochondrial Eve” had been built—and led to her “demise.”
But we’re getting ahead of ourselves. Before we answer whether there is any truth to “just-so” scenarios like the one posed by Spencer Wells, a brief history lesson might be appropriate.
On the first day of 1987, a new “discovery” seized the attention of the popular press. The original scientific article that caused all the commotion—“Mitochondrial DNA and Human Evolution”—appeared in the January 1, 1987 issue of Nature, and was authored by Rebecca Cann, Mark Stoneking, and Allan C. Wilson (see Cann, et al., 1987). These three scientists announced that they had “proven” that all modern human beings can trace their ancestry back to a single woman who lived 200,000 years ago in Africa. This one woman was nicknamed “Eve” (a.k.a., “Mitochondrial Eve”)—much to the media’s delight. An article in the January 26, 1987 issue of Time magazine bore the headline, “Everyone’s Genealogical Mother: Biologists Speculate that ‘Eve’ Lived in Sub-Saharan Africa” (Lemonick, 1987). A year later, that “speculation” became a major Newsweek production titled “The Search for Adam and Eve” (Tierney, et al., 1988). The provocative front cover presented a snake, tree, and a nude African couple in a “Garden of Eden”-type setting. The biblical-story imagery was reinforced by showing the woman offering a piece of fruit to the man.
A word of explanation is in order at this point. For decades, evolutionists had been trying to determine the specific geographical origin of humans—whether we all came from one specific locale, or whether there were many small pockets of people scattered around the globe. In the course of that research, a curious piece of data came to light. As they considered various human populations, Africans seemed to show much more genetic variation than non-Africans (i.e., Asians, Europeans, Native Americans, Pacific Islanders, et al.). According to molecular biologists, this increased variability is the result of African populations being older, and thus having had more time to accumulate mutations and diverge from one another. This assumption led some researchers to postulate that Africa was the ancient “cradle of civilization” from which all of humanity had emerged.
The genetic material (DNA) in a cell’s nucleus controls the functions of the cell, directing such activities as taking in nutrients from the body and making hormones, proteins, and other chemicals. Outside the nucleus is an area known as the cytoplasmic matrix (generally referred to simply as the cytoplasm), which contains, among other things, tiny bean-shaped organelles known as mitochondria. These often are described as the “powerhouses” or “energy factories” of the cell.
Mitochondria contain their own DNA, which they use to make certain proteins; the DNA in the nucleus oversees production of the rest of the proteins necessary for life and its functions. However, mitochondrial DNA (mtDNA) was thought to be special for two reasons. First, it is short and relatively simple (in comparison to the DNA found within the nucleus), containing only thirty-seven genes (instead of the 70,000+ genes located in the nuclear DNA). This makes it relatively easy to analyze. Second, unlike nuclear DNA, which each person inherits in equal portions from both parents, mitochondrial DNA was thought to be passed on only through the mother’s line (more about this later). Working from the assumption that mtDNA is passed to the progeny only by the mother, Dr. Cann and her coworkers believed that each new cell should contain copies of only the egg’s mitochondria. In trying to draw the human family tree, therefore, researchers took a special interest in these minute strands of the genetic code. What they really were interested in, of course, were the variations in mitochondrial DNA from one group of people to another.
Although our mtDNA should be, in theory at least, the same as our mother’s mtDNA, small changes (known as mutations) in the genetic code can, and do, arise. On rare occasions, mutations are serious enough to do harm. More frequently, however, the mutations have no effect on the proper functioning of either the DNA or the mitochondria. In such cases, the mutational changes will be preserved and carried on to succeeding generations.
Theoretically, if scientists could look farther and farther into the past, they would find that the number of women who contributed the modern varieties of mitochondrial DNA grows smaller and smaller until, finally, we arrive at one “original” mother. She, then, would be the only woman out of all the women living in her day to have a daughter in every generation until the present. Coming forward in time, we would see that the mtDNA varieties found within her female contemporaries were gradually eliminated as their daughters did not have children, had only sons, or had daughters who did not have daughters. This does not mean, of course, that we would look like this alleged ancestral mother; rather, it means only that we would have gotten our mitochondrial DNA from her.
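The winnowing of maternal lineages described above can be sketched with a toy simulation. This is not the researchers’ method, just an illustrative Wright-Fisher-style model with an assumed, constant population size:

```python
import random

def generations_until_one_lineage(population_size=50, seed=1):
    """Toy model: each woman in a new generation draws her mother at
    random from the previous one; count generations until every
    surviving mtDNA label traces back to a single founding woman."""
    random.seed(seed)
    # Each founding woman starts with a unique mtDNA label.
    labels = list(range(population_size))
    generations = 0
    while len(set(labels)) > 1:
        labels = [random.choice(labels) for _ in range(population_size)]
        generations += 1
    return generations

print(generations_until_one_lineage())
```

Every run of this sketch ends with a single label fixed in the population, which illustrates the sense in which a lone “mitochondrial ancestor” is a statistical inevitability of the model rather than evidence that only one woman was alive at the time.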
To find this woman, researchers compared the different varieties of mtDNA in the human family. Since mtDNA occurs in fairly small quantities, and since the researchers wanted as large a sample as possible from each person, they decided to use human placentas as their source of the mtDNA. So, Rebecca Cann and her colleagues selected 145 pregnant women and two cell lines representing the five major geographic regions: 20 Africans, 34 Asians, 46 Caucasians, 21 aboriginal Australians, and 26 aboriginal New Guineans (Cann, et al., 1987, 325:32). All placentas from the first three groups came from babies born in American hospitals. Only two of the 20 Africans were born in Africa.
After analyzing a portion of the mtDNA in the cells of each placenta, they found that the differences “grouped” the samples by region. In other words, Asians were more like each other than they were like Europeans, people from New Guinea were more like each other than they were like people from Australia, and so on.
Family tree of recent human evolution as proposed by Cann, et al. (1987).
Next, they saw two major branches form in their computer-generated tree of recent human evolution. Seven African individuals formed one distinct branch, which started lower on the trunk than the other four groups. This was because the differences among these individuals were much greater than the differences between other individuals and other groups. More differences mean more mutations, and hence more time to accumulate those changes. If the Africans have more differences, then their lineage must be older than all the others. The second major branch bore the non-African groups and, significantly, a scattering of the remaining thirteen Africans in the sample. To the researchers, the presence of Africans among non-Africans meant an African common ancestor for the non-African branches, which, likewise, meant an African common ancestor for both branches. The nickname “Eve” stuck to this hypothetical common ancestral mother, and later fired the media’s imagination.
Having concluded that the African group was the oldest, Dr. Cann and her colleagues wanted to find out just how old the group might be. To do this, they used what is known as a “molecular clock” that, in this case, was based on mutations in the mtDNA. The rate at which the clock ticked was determined from the accumulation of changes over a given period of time. As we note below in our discussion of the so-called molecular clock, if the assumption was made that there was one mutation every 1,000 years, and if scientists found a difference of 10 mutations between us and our ancient hypothetical ancestor, they could then infer that the ancestor lived 10,000 years ago.
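The clock arithmetic in the example above is simple multiplication; the rate and the mutation count are the illustrative figures from the paragraph, not measured values:

```python
def years_since_ancestor(mutation_differences, years_per_mutation=1_000):
    """Molecular-clock estimate in its simplest form: elapsed time is
    the number of accumulated differences times the assumed interval
    between clock 'ticks' (mutations)."""
    return mutation_differences * years_per_mutation

# 10 differences at one mutation per 1,000 years
print(years_since_ancestor(10))  # 10000
```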
The researchers looked in two places for their figures. First, they compared mtDNA from humans with that from chimpanzees, and then used paleontology and additional molecular data to determine the age of the supposed common ancestor. This (and similar calculations on other species) revealed a mutation rate in the range of 2% to 4% per million years. Second, they compared the groups in their study that were close geographically, and took the age of the common ancestor from estimated times of settlement as indicated by anthropology and archaeology. Again, 2% to 4% every million years seemed reasonable to them.
Cann, et al., suggested that the common mitochondrial ancestor diverged from all others by an average of 0.57% (325:34), which meant that she must have lived sometime between approximately 140,000 (0.57 ÷ 4 × 1,000,000) and 290,000 (0.57 ÷ 2 × 1,000,000) years ago. The figure of 200,000 was chosen as a suitable round number.
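The bracket reported by Cann, et al., follows directly from dividing the observed divergence by the assumed rate. A sketch of that calculation, using the paper’s figures of 0.57% divergence and 2% to 4% per million years:

```python
def age_range_years(divergence_percent, slow_rate=2.0, fast_rate=4.0):
    """Convert a percent divergence into an age range, given mutation
    rates expressed in percent per million years: the faster the
    assumed clock, the younger the inferred ancestor."""
    youngest = round(divergence_percent * 1_000_000 / fast_rate)
    oldest = round(divergence_percent * 1_000_000 / slow_rate)
    return youngest, oldest

print(age_range_years(0.57))  # (142500, 285000)
```

These endpoints, 142,500 and 285,000 years, are the values the authors rounded to the quoted figures of approximately 140,000 and 290,000 years.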
The results obtained from analysis of mitochondrial DNA eventually led to what is known in evolutionary circles as the “Out of Africa” theory. This is the idea that the descendants of Mitochondrial Eve were the only ones to colonize Africa and the rest of the world, supplanting all other hominid populations in the process. Many (though not all) evolutionists claim that such an interpretation is in accord with archaeological, paleontological, and other genetic data (see, for example, Stringer and Andrews, 1988; for an opposing viewpoint, see the written debate in the April 1992 issue of Scientific American).
While most evolutionists have accepted the mitochondrial DNA tree, they differ widely in their views regarding both the source of the nuclear DNA and the “humanity” of Eve. Some believe that Eve contributed all the nuclear DNA, in addition to the mitochondrial DNA. Some believe she was an “archaic” Homo sapiens, while others believe she was fully human. The exact interpretation is hotly debated because mitochondrial DNA is “something of a passenger in the genetic processes that led to the formation of new species: it therefore neither contributes to the formation of a new species nor reveals anything about what actually happened” (Lewin, 1987, 238:24). As Wells went on to observe:
When we sample people alive today, and examine their DNA to look for clues about their past, we are literally studying their genealogy—the history of their genes. As we have seen, people inherit their genes from their parents, so the study of genetic history is also a study of the history of the people carrying these genes. Ultimately, though, we hit a barrier when we trace back into the past beyond a few thousand generations—there is simply no more variation to tell us about these questions of very deep history. Once we reach this point, there is nothing more that human genetic variation can tell us about our ancestors. We all coalesce into a single genetic entity—“Adam” in the case of the Y-chromosome, “Eve” in the case of mtDNA—that existed for an unknowable period of time in the past. While this entity was a real person who lived at that time—the common ancestor of everyone alive today—we can’t use genetic methods to say very much about their ancestors. We can ask questions about how Adam and Eve relate to other species (are humans more closely related, as a species, to chimpanzees or sturgeons?), but we cannot say anything about what happened to the human lineage itself prior to the coalescence point (2002, p. 54, emp. in orig.).
The “reality” of Eve as the “most-recent common ancestor of all humans on Earth today,” however, depended upon two important “ifs.” If humans received mitochondrial DNA (mtDNA) only from their mothers, then researchers could “map” a family tree using that information. And, if the mutations affecting mtDNA had indeed occurred at constant rates, then the mtDNA could serve as a molecular clock for timing evolutionary events and reconstructing the evolutionary history of extant species. But, as we pointed out in our earlier article, “The Demise of ‘Mitochondrial Eve,’ ” it is the “ifs” in these two sentences where the current problem lies. The validity of these assertions is dependent upon two critically important assumptions: (1) that mtDNA is, in fact, derived exclusively from the mother; and (2) that the mutation rates associated with mtDNA have remained constant over time. The fact is (again, as we pointed out in the earlier article), we now know that both of these assumptions are wrong!
Ann Gibbons authored an article for the January 2, 1998 issue of Science titled “Calibrating the Mitochondrial Clock,” the subheading of which read as follows: “Mitochondrial DNA appears to mutate much faster than expected, prompting new DNA forensics procedures and raising troubling questions about the dating of evolutionary events.” In that article, she discussed new data which showed that the mutation rates used to obtain mitochondrial Eve’s age no longer could be considered valid.
Evolutionists have assumed that the clock is constant, ticking off mutations every 6000 to 12,000 years or so. But if the clock ticks faster or at different rates at different times, some of the spectacular results—such as dating our ancestors’ first journeys into Europe at about 40,000 years ago—may be in question (279:28).
Gibbons then quoted Neil Howell, a geneticist at the University of Texas Medical Branch in Galveston, who stated: “We’ve been treating this like a stopwatch, and I’m concerned that it’s as precise as a sun dial. I don’t mean to be inflammatory, but I’m concerned that we’re pushing this system more than we should” (279:28). Gibbons concluded:
Regardless of the cause, evolutionists are most concerned about the effect of a faster mutation rate. For example, researchers have calculated that “mitochondrial Eve”—the woman whose mtDNA was ancestral to that in all living people—lived 10,000 to 200,000 years ago in Africa. Using the new clock, she would be a mere 6,000 years old (1998, 279:29, emp. added).
“Mitochondrial Eve” a mere 6,000 years old—instead of 200,000?! Gibbons quickly went on to note, of course, that “no one thinks that’s the case” (279:29). She ended her article by discussing the fact that many test results are (to use her exact word) “inconclusive,” and lamented that “for now, so are some of the evolutionary results gained by using the mtDNA clock” (279:29).
Which brings us to the point of this article. As we pointed out in our introductory sentence, the news gets worse. The “evolutionary results gained by using the mtDNA clock” are not just “inconclusive.” They’re wrong! In the January 2003 edition of the Annals of Human Genetics, geneticist Peter Forster of Cambridge published an article (“To Err is Human”) in which he documented that, to use his words, “more than half of the mtDNA sequencing studies ever published contain obvious errors.” He then asked: “Does it matter? Unfortunately, in many cases it does.” Then came the crushing blow for “Mitochondrial Eve”: “…fundamental research papers, such as those claiming a recent African origin for mankind (Cann, et al., 1987; Vigilant, et al., 1991)…have been criticized, and rejected due to the extent of primary data errors” (67:2, emp. added). Then, as if to add salt to an already open and bleeding wound, Dr. Forster acknowledged that the errors discovered thus far are “only the tip of the iceberg…,” and that “there is no reason to suppose that DNA sequencing errors are restricted to mtDNA” (67:2,3).
One month later, Nature weighed in with an exposé of its own. In the February 20, 2003 issue, Carina Dennis authored a commentary on Forster’s work titled “Error Reports Threaten to Unravel Databases of Mitochondrial DNA.” Dennis reiterated the findings that “more than half of all published studies of human mitochondrial DNA (mtDNA) sequences contain mistakes.” Then, after admitting that “published mtDNA sequences are popular tools for investigating the evolution and demography of human populations,” she lamented: “[T]he problem is far bigger than researchers had imagined. The mistakes may be so extensive that geneticists could be drawing incorrect conclusions to studies of human populations and evolution” (2003, 421:773, emp. added).
In her report, Dennis quoted Eric Shoubridge, a geneticist at McGill University’s Montreal Neurological Institute in Canada, who investigates human diseases that result from problems with mtDNA. His response was: “I was surprised by the number of errors. What concerns me most is that these errors could be compounded in the databases” (421:773). In 1981, the complete sequence of human mtDNA—known as the “Cambridge Reference Sequence”—was published in a database format for scientists to use in their research (see Anderson, et al., 1981). It is from that initial database that many of the mtDNA sequences have been taken and used to predict, among other things, the Neolithic origin of Europeans (Simoni, et al., 2000) and the “factuality” of the creature known as “Mitochondrial Eve.” Yet Dr. Forster has been busily engaged in making corrections to that 1981 database almost since its inception, and has compiled his own database of corrected mitochondrial sequences.
Eric Shoubridge (quoted above) isn’t the only one who is “concerned” about Peter Forster’s findings. Neil Howell, vice president for research at MitoKor, a San Diego-based biotech company that specializes in mitochondrial diseases, suggested that Forster’s error-detection method “may even underestimate the extent of the errors” (as quoted in Dennis, 421:773-774, emp. added).
What has been the response of the scientific community? Let Forster answer: “Antagonism would be an understatement in some cases” (as quoted in Dennis, 421:773). He did note, however, that, at times, some of the scientists whose published papers have been found to contain the errors were “forthcoming in resolving discrepancies in sequences.” That’s nice—since “truth” and “knowledge” are what science is supposedly all about (our English word “science” derives from the Latin scientia, meaning knowledge).
In the end, where does all of this leave “Mitochondrial Eve”? Could we put it any plainer than Dr. Forster did when he said that “fundamental research papers, such as those claiming a recent African origin for mankind have been criticized, and rejected due to the extent of primary data errors”? Criticized—and rejected?!
Once upon a time, “Mitochondrial Eve” was viewed as the evolutionary equivalent of “wonder woman.” Now, she has become the relative every family dreads—the one who arrives for “just a short visit,” and then somehow never leaves. Poor Eve. How many times, we wonder, will she have to die before she finally can be buried—permanently—and left to “rest in peace”?
Anderson, S., A.T. Bankier, B.G. Barrell, M.H. de Bruijn, A.R. Coulson, et al. (1981), “Sequence and Organization of the Human Mitochondrial Genome,” Nature, 290:457-465, April 9.
Cann, Rebecca L., Mark Stoneking, and Allan C. Wilson (1987), “Mitochondrial DNA and Human Evolution,” Nature, 325:31-36, January 1.
Dennis, Carina (2003), “Error Reports Threaten to Unravel Databases of Mitochondrial DNA,” Nature, 421:773-774, February 20.
Forster, Peter (2003), “To Err is Human,” Annals of Human Genetics, 67:2-4, January.
Gibbons, Ann (1998), “Calibrating the Mitochondrial Clock,” Science, 279:28-29, January 2.
Lemonick, Michael D. (1987), “Everyone’s Genealogical Mother,” Time, p. 66, January 26.
Lewin, Roger (1987), “The Unmasking of Mitochondrial Eve,” Science, 238:24-26, October 2.
Simoni, L., F. Calafell, D. Pettener, J. Bertranpetit, and G. Barbujani (2000), “Geographic Patterns of mtDNA Diversity in Europe,” American Journal of Human Genetics, 66:262-278, January.
Stringer, C.B. and P. Andrews (1988), “Genetic and Fossil Evidence for the Origin of Modern Humans,” Science, 239:1263-1268, March 11.
Tierney, John, Lynda Wright, and Karen Springen (1988), “The Search for Adam and Eve,” Newsweek, pp. 46-52, January 11.
Vigilant, Linda, Mark Stoneking, Henry Harpending, Kristen Hawkes, and Allan C. Wilson (1991), “African Populations and the Evolution of Human Mitochondrial DNA,” Science, 253:1503-1507, September 27.
Wells, Spencer (2002), The Journey of Man: A Genetic Odyssey (Princeton, NJ: Princeton University Press).