
The Growing Conflict in America

Muslim Americans Living in a Secular Democracy and a Predominantly Christian Country

[A five-part series]

Part I

To be totally upfront with my cyberspace audience, it is important that you know where I stand politically. First, I am an ultra-liberal on civil rights and human rights, and a card-carrying member of Amnesty International. Second, when it comes to homeland security, national defense and the military, I am, by all measures, quite conservative. Part of the reason, I suppose, is that I have a military background as a Vietnam combat veteran. Nonetheless, as a social scientist I have a professional responsibility to present data-driven facts, not bombastic rhetoric, political clichés, personal biases or hyperbole.

Every psychology book that talks about human needs (for example Abraham Maslow’s hierarchy of needs) describes the need to survive as our most important human need.

Right after Pearl Harbor, folks living in California, Oregon and Washington were very worried and anxious that Japan would invade the West Coast of the United States. Tremendous fear encompassed Americans throughout the nation. Reason and calm were in very short supply (as they are now) as Americans recoiled in the days and weeks that followed the attack on the U.S. Navy’s Pacific Fleet and the army and air bases stationed in Hawaii.

Today we have a similar situation with Islamic jihadist attacks: the 9/11 attacks that killed nearly 3,000 people and injured scores of others; the Boston Marathon bombing that killed three people (a fourth, a police officer, was killed in the ensuing manhunt) and injured many others; the November 2009 attack at Fort Hood that killed 13 soldiers and wounded 30 others; the July 2015 attack by a jihadist on a military recruiting facility and naval reserve center in Chattanooga, Tennessee, that killed four Marines and one sailor; the carnage in San Bernardino, California, that left 14 citizens dead and many more wounded; and, as recently as January 7, 2016, a professed jihadist who tried to murder a Philadelphia police officer, firing on him repeatedly at point-blank range. Fortunately, the officer chased him and fired back, wounding the assailant.

And, internationally, all of this was preceded in 2015 by jihadist attacks in Paris, France, that killed 130 people; in Beirut, Lebanon, where more than 40 were killed and some 200 injured; at a hotel in Mali, where 20 were killed; and by the downing of a Russian passenger jet over the Sinai Peninsula that killed all 224 people aboard.

With all these attacks by radical jihadist extremists, fear has once again gripped the entire nation. But so have anger and, finally, the willingness of a nation to put itself on a war footing with our declared enemies, whether there is a formal declaration of war or not. If there were a formal declaration of war by the United States Congress, the country would give the President the power to engage the enemy with all its might, including strategic nuclear weapons.

Americans are not weaklings; Americans are tough, extremely resilient, persistent and strong-willed. As a nation we are protective of our people, our laws, institutions, and the supreme law of the land—the United States Constitution.

As Admiral Isoroku Yamamoto is reputed to have said just after the attack on Pearl Harbor, “I fear all we have done is to awaken a sleeping giant and fill him with a terrible resolve.” Guess what, folks: the sleeping giant is awake again, and angry as hell over the onslaught of murders and barbaric acts committed by ISIS and jihadists everywhere.

Human rights and civil rights are not protected under Islamic law. Consequently, the world has condemned the brutality, torture, rape, slavery of women, gratuitous cruelty, beheadings, incineration and drowning of prisoners carried out by Islamic jihadists and terrorists who have committed war crimes.

It is incumbent upon all nations to collectively re-institute a war crimes tribunal like the Nuremberg Trials held in Germany in 1945-46. As presidential candidate Hillary Clinton recently put it, she now believes the Islamic State group’s persecution of Christians, the Yazidi minority and other religious and ethnic minorities in the Middle East should be defined as “genocide.”

This is the background for what is happening. The questions Americans want answered, and the things they want done, are evaluated and reviewed below. However, only facts can guide the way to understanding just what is going on, and what could or should be done about it.

Conflict of Values

Right now there appears to be a growing conflict in the United States between Muslims and non-Muslims, and a number of questions related to this conflict need to be answered: (1) Historically, why is there a conflict between Muslims and non-Muslims worldwide, and how did we arrive at a day and age in which radical jihadists want to dominate the entire world? (2) What does it mean to be an American, and how well do Muslim Americans identify as Americans? (3) Do Muslim Americans promote, foster and support, albeit as a hidden agenda, the replacement of American laws and the United States Constitution with the Sharia law that is intimately interwoven with the religion of Islam? (4) Has the Muslim Brotherhood infiltrated America with the goal of transforming the United States into an Islamic state? And (5) if so, what can be done about it? Each question corresponds to one part of this five-part series.

Background of Religious Conflict

The conflict between Muslims and Christians is nothing new; it dates back some 1,400 years. The purpose of Part I is to give my cyberspace audience some historical perspective on the clash between Muslims and Christians.

The war against ISIS today gives the impression of a continuation of a religious war that is still unsettled, even after 1,400 years. There are approximately 2.2 billion Christians in the world today; by comparison, there are now approximately 1.6 billion Muslims. Together, the two religions comprise more than half the people on the planet. Christianity is more than 2,000 years old, while Islam did not come into existence until the 7th century A.D. What the two religions have in common is that both possess moderates and extremists. While moderates can live in harmony, extremists cannot.

In Part I of this series I will describe the historical basis of the conflict between Muslims and Christians.

In Part II, I will compare the standard for American citizenship set in 1907 by our 26th president, Theodore Roosevelt, and see whether Muslim Americans achieve that goal.

In Part III, I will address whether there is a plot underway whereby the Muslim world (here and abroad) is moving to replace American laws (the United States Constitution and all federal, state and local laws) with a foreign set of Muslim religious laws known as Sharia law. I personally don’t like conspiracy theories, but there is, unfortunately, some critical evidence to support the notion that certain Muslim organizations have tried to do this.

In Part IV, I will discuss whether the Muslim Brotherhood is trying to secretly infiltrate and replace the American government, our values of freedom and democracy, our legal system, and our educational and cultural institutions with an Islamic state that promotes only Islam and its religion-based legal system, Sharia law.

In Part V, I will discuss what can be done about it, both on the home front and abroad.

All five parts comprise the nexus of concerns that non-Muslim Americans, and many moderate Muslim Americans as well, have today. If America were ever taken over by the Islamic State, make no mistake about it: moderate Muslim Americans would be the first to die at the hands of jihadists.

Early History of Islam

The history of Islam concerns the political, economic, social, and cultural developments in the territories ruled by Muslims or otherwise substantially influenced by the religion of Islam.

Despite concerns about reliability of early sources, most historians believe that Islam originated in Mecca and Medina at the start of the 7th century. A century later, the Islamic empire extended from Iberia in the west to the Indus in the east.

Polities such as those ruled by the Umayyads (in the Middle East and later in Iberia), Abbasids, Fatimids, and Mamluks were among the most influential powers in the world. The Islamic civilization gave rise to many centers of culture and science and produced notable astronomers, mathematicians, doctors and philosophers during the Golden Age of Islam. Technology flourished; there was investment in economic infrastructure such as irrigation systems and canals; and the importance of reading the Qur’an produced a comparatively high level of literacy in the general populace.

In the 13th and 14th centuries, destructive Mongol invasions from the east, along with population losses from the Black Death, greatly weakened the traditional centers of the Islamic world, stretching from Persia to Egypt. In the early modern period, however, the Ottomans, the Safavids, and the Mughals were able to create new world powers.

During the modern era, most parts of the Muslim world fell under influence or direct control of European great powers. Their efforts to win independence and build modern nation states over the course of the last two centuries continue to reverberate to the present day.

Historical Impact of the Crusades

From a sociological and historical point of view, the Christian Crusades had both intended and unintended consequences, positive and negative. On the positive side, the Papacy, once all-powerful, had become fragmented by the 14th century, and the development of modern nation-states was well under way in France, England, Burgundy, Portugal, Castile, and Aragon, partly as a result of the dominance of the church at the beginning of the crusading era.

There was also an expansion of trade throughout Europe as a result of the Crusades, driven by the need to raise, transport, and supply large armies. Roads that had been unused since the days of Rome saw significant increases in traffic as local merchants began to expand their horizons. Much Islamic culture and thought, in science, medicine, and architecture, was transferred to the West during the Crusades. “This also aided the beginning of the Renaissance in Italy, as various Italian city-states from the very beginning had important and profitable trading colonies in the crusader states, both in the Holy Land and later in captured Byzantine territory.”

On the negative side, Muslims found the Crusades to be cruel and savage onslaughts by European Christians. “In the 21st century, some in the Arab world, such as the Arab independence movement and Pan-Islamism movement, continue to call Western involvement in the Middle East a ‘Crusade.’”

However, early Islamic and Muslim forces cannot claim that they never invaded and plundered other nation-states. There was real justification for wanting to drive Muslim armies from territories they had, in fact, invaded, and much of the violence perpetrated against innocent Christian and non-Christian peoples of that era was the result of such invasions.

A true account of world history shows that Islamic armies repeatedly attacked Christian lands, desecrated sanctuaries and tortured Christians; the Christians who fought back never desecrated Mecca in return. Jerusalem changed hands many times over the centuries.

The seventh century was particularly tumultuous: Persian armies stormed the city in 614 A.D., and the Byzantine Emperor Heraclius led Eastern Christians to reclaim it by 630 A.D.

However, within a few years Islamic forces had broken the Byzantine military and chased them out of Palestine. Jerusalem surrendered to a Muslim army in 638 A.D., and construction began soon thereafter on a mosque at the Temple Mount.

Accordingly, “After capturing Jerusalem, the Muslim armies poured through the eastern and southern provinces of the reeling Byzantine Empire. In the 640s Armenia in the north and Egypt in the south fell to Islam. In 655 A.D. the Muslims won a naval battle with the Byzantines and very nearly captured the Byzantine emperor.”

In 711 A.D. Muslims controlled all of northern Africa, and by 712 A.D. they had penetrated deep into Christian Spain. At the Battle of Guadalete (711 A.D.) they defeated the Visigoths and killed their king, Roderic. Spain promptly collapsed.

Attempts were made by Muslims in the Middle East to push further into the Byzantine Empire. In 717 A.D. they landed in southeastern Europe and besieged the Byzantine capital, Constantinople. “In 846 A.D. Muslim raiders attacked the outlying areas of Rome, the center of western civilization. This act would be comparable to Christians sacking Mecca or Medina, something they have never done.”

Near the end of the ninth century, Muslim pirate havens were established along the coasts of southern France and northern Italy. These pirates threatened commerce, communication, and pilgrim traffic for a hundred years or more.

During the tenth century, however, the tide began to turn. In the East in the 950s and 960s, the Byzantines mounted a series of counterattacks. They eventually recovered the islands of Crete and Cyprus and a good bit of the territory in Asia Minor and northern Syria, including Antioch. They lacked the strength to retake Jerusalem, though they might have struggled harder had they known the terrors the city would soon face.

In 1000 A.D. much or most of the Holy Land was still populated by Christians. However, a local Muslim leader named Hakim persecuted Christians and Jews. In 1009 A.D. he ordered the destruction of the rebuilt church of the Holy Sepulcher in Jerusalem. As a result, the Christian population began to shrink under Hakim’s tyrannical rule.

The Middle East was in for major changes that would alter the balance of power among all the faiths. The major change was the invasion of the Middle East by the Seljuk Turks, nomads and recent converts to Islam who made steady inroads into the Muslim Arab world. In 1055 A.D. they took Baghdad and disrupted the stability of the Middle East.

The invasion of the Muslim Turks might well be thought of as the straw that broke the camel’s back as far as Western Christendom was concerned.

In 1071 A.D. the Byzantine Emperor Romanus IV Diogenes confronted a Turkish invasion force in the far eastern provinces of the Byzantine Empire. The two armies met at the village of Manzikert, near Lake Van, and the Byzantines were utterly destroyed. As a result of this disaster, the Byzantines lost all the territory they had painstakingly recovered in the ninth and tenth centuries, including the entirety of Asia Minor, the breadbasket and recruiting ground of the empire.

What followed was a response to Muslim and Arab invasions of Christian holy sites, lands and property in the Middle East. Succeeding Byzantine emperors sent frantic calls to the West for aid, directing them primarily at the popes, who were generally seen as protectors of Western Christendom. Pope Gregory VII received these appeals first, and in 1074 A.D. he discussed leading a relief expedition to Byzantium himself. But this proved impractical, and no aid was offered. The Byzantines continued sending appeals, however, eventually finding an audience with Pope Urban II.

The rest, as they say, is history. In 1095 A.D. the West responded to the plight of Eastern Christians by mounting the First Crusade, and in 1099 A.D. crusaders stormed Jerusalem.

It wasn’t long before a series of Muslim rulers sought to retake the Christian-held holy lands. These rulers, among them Zengi, Nur al-Din, and the famous Saladin, fought to reunite parts of the Islamic Middle East and initiated a jihad, a counter-crusade, against the Christians of Jerusalem and the surrounding regions. A desire to reconquer the city figured more and more prominently in Muslim writings. “By the end of the twelfth century, Saladin had re-conquered Jerusalem more or less permanently. The entire Holy Land was back under Islamic control by 1291.”

While many take the perspective that the Crusades were Christianity’s darkest hour, others might say they were to a large extent justified, given the centuries of persecution of Christians and Jews by the Muslim world. It all points to the never-ending futility of nations at war over land, and to the willingness of many peoples of the world to fight in the name of, and to kill for, God.

Here we are in the 21st century, and a Muslim jihadist holy war still exists, casting its ominous threat over the entire world, including America.

Current Status of Religions Worldwide

Worldwide, more than eight-in-ten people identify with a religious group. A comprehensive demographic study of more than 230 countries and territories conducted by the Pew Research Center’s Forum on Religion & Public Life estimates that there are 5.8 billion religiously affiliated adults and children around the globe, representing 84% of the 2010 world population of 6.9 billion.

The demographic study – based on analysis of more than 2,500 censuses, surveys and population registers – finds 2.2 billion Christians (32% of the world’s population), 1.6 billion Muslims (23%), 1 billion Hindus (15%), nearly 500 million Buddhists (7%) and 14 million Jews (0.2%) around the world as of 2010. In addition, more than 400 million people (6%) practice various folk or traditional religions, including African traditional religions, Chinese folk religions, Native American religions and Australian aboriginal religions. An estimated 58 million people – slightly less than 1% of the global population – belong to other religions, including the Baha’i faith, Jainism, Sikhism, Shintoism, Taoism, Tenrikyo, Wicca and Zoroastrianism, to mention just a few.

At the same time, the new study by the Pew Forum also finds that roughly one-in-six people around the globe (1.1 billion, or 16%) have no religious affiliation. This makes the unaffiliated the third-largest religious group worldwide, behind Christians and Muslims, and about equal in size to the world’s Catholic population. Surveys indicate that many of the unaffiliated hold some religious or spiritual beliefs (such as belief in God or a universal spirit) even though they do not identify with a particular faith.
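The percentages in the Pew figures above follow directly from the rounded counts; a quick sketch (using the study’s quoted numbers, not any new data) recovers the reported shares:

```python
# Adherent counts as quoted in the Pew study (rounded), with shares
# computed against the 2010 world population of 6.9 billion.
world_pop = 6.9e9

def share(count):
    """Percentage of world population, rounded to the nearest point."""
    return round(100 * count / world_pop)

print(share(5.8e9))  # 84 -- religiously affiliated ("84% of the 2010 world population")
print(share(2.2e9))  # 32 -- Christians
print(share(1.6e9))  # 23 -- Muslims
print(share(1.1e9))  # 16 -- religiously unaffiliated
```

(The Hindu count is omitted here because the rounded 1 billion figure works out to 14.5%; Pew’s unrounded count is what yields the reported 15%.)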

A New Estimate of the U.S. Muslim Population

By Besheer Mohamed

“Pew Research Center estimates that there were about 3.3 million Muslims of all ages living in the United States in 2015. This means that Muslims made up about 1% of the total U.S. population (about 322 million people in 2015), and we estimate that that share will double by 2050.

Our new estimate of Muslims and other faiths is based on a demographic projection that models growth in the American Muslim population since our 2011 estimate and includes both adults and children. The projection uses data on age, fertility, mortality, migration and religious switching drawn from multiple sources, including the 2011 survey of Muslim Americans.

According to our current estimate, there are fewer Muslims of all ages in the U.S. than there are Jews by religion (5.7 million) but more than there are Hindus (2.1 million) and many more than there are Sikhs.

In some cities Muslims comprise significantly more than 1% of the community. And even at the state level Muslims are not evenly distributed: Certain states, such as New Jersey, have two or three times as many Muslim adults per capita as the national average.

Recent political debates in the U.S. over Muslim immigration and related issues have prompted many to ask how many Muslims actually live in the United States. But coming up with an answer is not easy, in part because the U.S. Census Bureau does not ask questions about religion, meaning that there is no official government count of the U.S. Muslim population.

Since our first estimate of the size of the Muslim American population in 2007, we have seen a steady growth in both the number of Muslims in the U.S. and the percentage of the U.S. population that is Muslim.

In addition, our projections suggest the U.S. Muslim population will grow faster than the Hindu population, and much faster than the Jewish population in the coming decades. Indeed, even before 2040, Muslims are projected to become the second-largest religious group in the U.S., after Christians. By 2050, the American Muslim population is projected to reach 8.1 million people, or 2.1% of the total population.

Just over half of the projected growth of the American Muslim population from 2010 to 2015 is due to immigration. Over the last 20 years, there has been an increase in the number of Muslim immigrants coming to the U.S. Muslim immigrants currently represent about 10% of all legal immigrants arriving in the U.S., and a significantly smaller percentage of unauthorized immigrants.

The other main cause of Islam’s recent growth is natural increase. American Muslims tend to have more children than Americans of other religious faiths. Muslims also tend to be younger than the general public, so a larger share of Muslims will soon be at the point in their lives when people begin having children.

There has been little net change in the size of the American Muslim population in recent years due to conversion. About one-in-five American Muslim adults were raised in a different faith or none at all. At the same time, a similar number of people who were raised Muslim no longer identify with the faith. About as many Americans become Muslim as leave Islam.”
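The projection quoted above, from 3.3 million in 2015 to 8.1 million in 2050, implies a steady compound growth rate. A small sketch makes the implied arithmetic explicit; the rate is derived here for illustration and is not a figure Pew itself reports:

```python
# Compound annual growth rate (CAGR) implied by the Pew estimates
# quoted above: 3.3 million U.S. Muslims in 2015, a projected
# 8.1 million in 2050.
pop_2015 = 3.3e6
pop_2050 = 8.1e6
years = 2050 - 2015

cagr = (pop_2050 / pop_2015) ** (1 / years) - 1
print(f"implied growth: {cagr:.1%} per year")  # implied growth: 2.6% per year
```

A sustained rate of roughly 2.6% a year, from immigration and natural increase combined, is what it takes for the Muslim share of the U.S. population to double, as the article projects.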

The Great Irony of Religious Wars

The great irony of all the bloodshed spilled since the 7th century is that the underpinning of these religions, belief itself, promoted by endless “true believers” in Christianity and Islam, may rest on a false premise to begin with: that some supernatural entity (like the Christian God or Islam’s Allah) actually exists. Religious wars, of course, are not fought over scripture alone; war is more complicated than that.

Geopolitical motives (seizing land and plundering resources) are the warring factions’ real reasons; religious belief serves as the justification. Nevertheless, people will resort to violence to get their own way, and often use religion’s notion of faith in a God to justify their willingness to commit acts of violence and pursue the spoils of war.

All religions use faith as a substitute for reason; it is their justification for behavior, including violence and harmful deviant acts. This is despite the fact that they bear the ultimate burden of proof for supernatural entities like a god.

The Burden of Proof

An agnostic may be defined as a person who believes that the existence of God, or a primal cause, can be neither proved nor disproved. The word agnostic comes from the Greek for “unknown” or “unknowable.” The term should be contrasted with “gnosis,” or Gnostic, where the latter term means knowledge.

Another term used to describe one’s position on God or primal causes is atheist. Atheists, as a group of nonbelievers, labor under certain disadvantages in the position they take. The first is a verbal assertion about what is unknown, unknowable, supernatural, or invisible: the assertion that something does not exist. Such an assertion is patently “unscientific.”

By asserting that something does not exist, one immediately clashes with what science has long held as its own limitation: it is impossible to prove a negative hypothesis. Science doesn’t work that way, for what data would one collect (and data are the cornerstone of all science) in order to test the hypothesis that something does not exist? Put very simply, it cannot be done.

Ironically, asserting the non-existence of a God is strikingly similar to living by faith that God does exist. Many people don’t realize it, but the religious zealot and the atheist share a common posture: both are making a “leap of faith.” The believer and the nonbeliever share the same podium in that respect. Nevertheless, there is an important difference that favors the atheist over the theist or deist, and that difference is the burden of proof.

The burden of proof does not lie with the atheist or the scientist to prove that something does not exist. It lies with those who make claims of a supernatural nature; otherwise such claims are merely unsubstantiated assertions of belief. What separates the atheist and the scientist on the one hand from the true believer or religious zealot on the other are their tools of measurement, their willingness to measure, and the approach they take to such measurement.

Interestingly, the invisibility of the subject matter isn’t even the issue. Why? Because even where the subject matter is invisible, it is measurement, and a willingness to measure, that matters. Molecules, atoms, protons, electrons and even the elusive neutrino are invisible to the naked eye; nevertheless, they can be measured, and their existence thereby demonstrated. God is likewise alleged to be invisible to the naked eye, yet theologians and fundamentalist “true believers” of all types have yet to provide a shred of evidence of existence through any kind of measurement.

Said simply, they have failed to provide the proof needed to substantiate their supernatural claims. It is interesting to note that the 20th century’s greatest scientist, Albert Einstein, never invoked supernatural forces to explain the fundamental laws of the universe.

Why America is Turning More Secular

Data from a Barna survey done years ago strongly suggested that a slide toward syncretism may be responsible for the decline of Christianity in the 20th century. Evidently, the democratic trend toward freedom of religion, and freedom from religion, took hold in America. The net effect of these changes, within Christianity and outside it, is a move toward a more secular society.

There are three basic reasons American society, in particular, is becoming more secular: (1) The religious right is trying to invade secular society, (2) scandals within the church have lowered its status in the eyes of the public, and (3) simultaneously, science education and technology have come to dominate the social landscape of our culture through laboratory research, and through educational programs on television and in the classroom.

It is also true that the alienation produced by fundamentalists run amok, with their disdain for liberal and mainstream Protestant denominations, created an atmosphere in which younger potential converts automatically looked askance at religious institutions altogether. Until mainstream and liberal churches band together and fight fundamentalists politically and socially, Christianity will continue to lose adherents.

The same can be said of Islam: until moderates in the Islamic faith band together against jihadist extremists and do away with Sharia law, they will continue to lose potentially moderate adherents at home and abroad, and will suffer the consequences of right-wing extremist theocracy at the hands of groups like ISIS and al-Qaeda.

Below is a very poignant article found on EURweb (Electronic Urban Report), written by the freelance writer and blogger Trevor Brookins. Its title is “The Socialist’s Journal: Theology vs. Theocracy,” and it gets to the heart of the difference between a theology and a theocracy. Brookins is from Rockland County, New York; he is currently working on a book about American culture during the Cold War, and he maintains a blog called This Seems Familiar.

“Theocracy is partly the source of the biggest problem today in that it is a perspective that produces religious fundamentalists. Contrarily, theology is the biggest source of hope for ending conflict in the world. Ironically enough, these two concepts are closely related, and one grows out of the other.

In its most basic form theology is about understanding the nature of God and answering basic questions about human existence, two of which tremendously influence our interactions with others. The question of ‘how ought we to behave?’ is the part of any religion that outlines ethics, and there are many commonalities between faiths; the question of ‘where are we going?’ addresses what happens after death, and its answer contains fewer commonalities, which is where the potential for conflict arises.

Given enough time, a group of people will eventually make contact with another group who do not answer the afterlife question the way they do. When this contact is made, the two groups can make the ethical question most important, in which case they will attempt to live peacefully and harmoniously alongside their new neighbors – this is the theological response. Or the two groups can make the afterlife question most important, in which case each perceives the other as heathen and attempts to eliminate the other religious perspective by converting their adversaries, if not outright killing them – this is the theocratic response.

Historically we have documented many more cases of the second version of events following contact, because of the wars that followed and the exchange of territory. Also of note is the correlation between religious wars and the institution of monarchy. Royal families that rely on hereditary rule and Divine Right to maintain their status are essentially claiming that God wants them in charge. It is therefore an easy conclusion to reach that, similarly, God wants X, so we do whatever it takes, including war, to attain X.

This line of reasoning has been used so frequently, and with such success, that it lies behind every empire in Western civilization since Rome. And so convinced of this mindset are some in Western civilization that when a nation fails to achieve a goal or expand its territory, an explanation offered is that the country must not be following God’s will.

However, in a world where monarchical rule has become obsolete in favor of democracy, the “God wants this” line of reasoning has also fallen out of favor. Religious fundamentalists are ultimately advocating turning back the clock and adopting God’s law as the operating principle for a country, yet even within any given faith there is much debate about what God’s law is. Furthermore, this theocratic perspective obscures the theological perspective that allows groups to live peacefully that under theocracy might be at war.

Democracy can be said to be the opposite of theocracy and because of this it is impossible for a country to operate under both of these forms of government at the same time. On the other hand democracy and theology can coexist, and often do so to the benefit of both.

In the United States we obsess over Muslim fundamentalists and with good reason because there are many who seek to harm us. Equally dangerous though are the Christian fundamentalist principles that guide foreign policy. God wants Americans to have oil like God wanted Caucasians to expand across North America, that is to say not at all. God is being used to justify political and economic decisions.

Most people are moderate and used to making compromises. Even within our religious lives, few of us follow all of the rules – ask the most devout Christian you know whether he or she really would refuse to have a woman in leadership. What is at stake in that instance is simply another perspective on a topic. How much more, then, should we be willing to compromise when what is at stake is thousands of lives? Theocracy yields fundamentalism, conflict and death. Theology yields moderation, understanding and peace. Which will we choose?”

As said earlier, in Part II I will compare Muslim Americans against the standard for American citizenship set by our 26th president in 1907, to see whether they meet that standard.


Our Human Origins—Part III

Evolution is the Real Story


In this final segment of the series (Part III), I am going to provide a summarized view of human evolution, discuss the scientific relevance of the evidence for evolution, tell my cyberspace audience how they can learn about their own origins going back 160,000–200,000 years, and, finally, discuss my own ancestral journey out of Africa. As you read this material, be aware that “Ma” means millions of years ago and “ka” means thousands of years ago.

Summarized Events of Human Evolution




15 Ma  Hominidae (great apes) speciate from the ancestors of the gibbon (lesser apes).

13 Ma  Hominidae ancestors speciate from the ancestors of the orangutan. Pierolapithecus catalaunicus is believed to be a common ancestor of humans and the great apes, or at least a species that brings us closer to a common ancestor than any previous fossil discovery. It had special adaptations for tree climbing, just as humans and other great apes do: a wide, flat rib cage, a stiff lower spine, flexible wrists, and shoulder blades that lie along its back.

10 Ma  The lineage currently represented by humans and the Pan genus (chimpanzees and bonobos) speciates from the ancestors of the gorillas.

7 Ma  Sahelanthropus tchadensis. The Hominina speciate from the ancestors of the chimpanzees. Both chimpanzees and humans have a larynx that repositions during the first two years of life to a spot between the pharynx and the lungs, indicating that the common ancestor had this feature, a precondition for vocalized speech in humans. The last common ancestor lived around the time of Sahelanthropus tchadensis, ca. 7 Ma; S. tchadensis is sometimes claimed to be the last common ancestor of humans and chimpanzees, but there is no way to establish this with any certainty. The earliest known representative of the ancestral human line post-dating the separation from the chimpanzee line is Orrorin tugenensis (Millennium Man, Kenya; ca. 6 Ma).

4.4 Ma  Ardipithecus, a very early hominin genus (subfamily Homininae). Two species are described in the literature: A. ramidus, which lived about 4.4 million years ago during the early Pliocene, and A. kadabba, dated to approximately 5.6 million years ago (late Miocene). A. ramidus had a small brain, measuring between 300 and 350 cm³, about the same size as a modern bonobo or female common chimpanzee brain, much smaller than the brain of australopithecines like Lucy (~400 to 550 cm³), and slightly over a fifth the size of the modern Homo sapiens brain. Ardipithecus was arboreal, meaning it lived largely in the forest, where it competed with other forest animals for food, including the contemporary ancestors of the chimpanzees. Ardipithecus was probably bipedal, as evidenced by its bowl-shaped pelvis, the angle of its foramen magnum, and its thinner wrist bones, though its feet were still adapted for grasping rather than walking long distances.

3.6 Ma  Australopithecus afarensis. Some Australopithecus afarensis left human-like footprints in volcanic ash at Laetoli (northern Tanzania), providing strong evidence of full-time bipedalism. Australopithecus afarensis lived between 3.9 and 2.9 million years ago and is thought to have been ancestral to both the genus Australopithecus and the genus Homo. Compared to the modern and extinct great apes, A. afarensis has reduced canines and molars, although they are still relatively larger than in modern humans. A. afarensis also has a relatively small brain (~380–430 cm³) and a prognathic (i.e., anteriorly projecting) face. Australopithecines have been found in savanna environments and probably expanded their diet to include meat from scavenging opportunities. An analysis of Australopithecus africanus lower vertebrae suggests that females had skeletal changes to support bipedalism even while pregnant.

3.5 Ma  Kenyanthropus platyops, a possible ancestor of Homo, emerges from the Australopithecus genus.

3 Ma  The bipedal australopithecines (a genus of the Hominina subtribe) evolve in the savannas of Africa, where they are hunted by Dinofelis. Loss of body hair takes place in the period 3–2 Ma, in parallel with the development of full bipedalism.




2.5 Ma  Appearance of the genus Homo. Homo habilis, thought to be the ancestor of the lankier and more sophisticated Homo ergaster, lived side by side with Homo erectus until at least 1.44 Ma, making it highly unlikely that Homo erectus evolved directly from Homo habilis. The first stone tools appear at the beginning of the Lower Paleolithic. (Further information: Homo rudolfensis.)

1.8 Ma  Homo erectus evolves in Africa. Homo erectus bears a striking resemblance to modern humans, but has a brain about 74 percent the size of modern man’s. Its forehead is less sloping than that of Homo habilis, and its teeth are smaller. Other hominid designations such as Homo georgicus, Homo ergaster, Homo pekinensis, and Homo heidelbergensis are often placed under the umbrella species name Homo erectus. Starting with Homo georgicus, found in what is now the Republic of Georgia and dated to 1.8 Ma, the pelvis and backbone grew more human-like and gave H. georgicus the ability to cover very long distances in order to follow herds of other animals. This is the oldest hominid fossil found outside of Africa. Control of fire by early humans is achieved by 1.5 Ma by Homo ergaster, which reaches a height of around 1.9 meters (6.2 ft.). Evolution of dark skin, linked to the loss of body hair in human ancestors, is complete by 1.2 Ma. Homo pekinensis first appears in Asia around 700 ka; according to the theory of a recent African origin of modern humans, it could not be a human ancestor, but was rather a cousin offshoot species of Homo ergaster. Homo heidelbergensis was a very large hominid with a more advanced complement of cutting tools, and it may have hunted big game such as horses.

1.2 Ma  Homo antecessor may be a common ancestor of humans and Neanderthals. At present estimate, humans have approximately 20,000–25,000 genes and share 99% of their DNA with the now-extinct Neanderthals, and 95–99% of their DNA with their closest living evolutionary relatives, the chimpanzees. The human variant of the FOXP2 gene (linked to the control of speech) has been found to be identical in Neanderthals, so it can be deduced that Homo antecessor also had the human FOXP2 gene.

600 ka  Three 1.5 m (5 ft.) tall Homo heidelbergensis left footprints in powdery volcanic ash that solidified in Italy. Homo heidelbergensis may be a common ancestor of humans and Neanderthals. It is morphologically very similar to Homo erectus, but Homo heidelbergensis had a larger braincase, about 93% the size of that of Homo sapiens. The holotype of the species was tall, 1.8 m (6 ft.), and more muscular than modern humans. Beginning of the Middle Paleolithic.

338 ka  Y-chromosomal Adam lived in Africa approximately 338,000 years ago, according to a recent study. He is the most recent common ancestor from whom all male human Y chromosomes are descended.

200 ka  Homo sapiens sapiens. Omo 1 and Omo 2 (Omo River, Ethiopia) are the earliest fossil evidence for anatomically modern Homo sapiens.

160 ka  Homo sapiens (Homo sapiens idaltu) in Ethiopia, Awash River, Herto village, practice mortuary rituals and butcher hippos. Potential earliest evidence of anatomical and behavioral modernity consistent with the continuity hypothesis, including use of red ochre and fishing.

150 ka  Mitochondrial Eve, a woman who lived in East Africa. She is the most recent female ancestor common to all mitochondrial lineages in humans alive today. Note that there is no evidence of any characteristic or genetic drift that significantly differentiated her from the contemporary social group she lived with at the time. Her ancestors were Homo sapiens, as were her contemporaries.

90 ka  Appearance of mitochondrial haplogroup L2.

70 ka  Behavioral modernity occurs, according to the “great leap forward” theory. This involves the ability of modern humans to think more conceptually and symbolically. The use of symbols as messages results in the creation of social meaning, which is ascribed through social interaction with others: one person makes a gesture, a second person reacts to that gesture, and from that exchange a new social reality is created. In my opinion this is the beginning of cultural evolution running right alongside biological evolution. It is the start of a meaningful exchange of ideas and learning.

60 ka  Appearance of mitochondrial haplogroups M and N, which participate in the migration out of Africa. Homo sapiens that leave Africa in this wave begin interbreeding with the Neanderthals they encounter.

50 ka  Migration to South Asia. M168 mutation (carried by all non-African males). Beginning of the Upper Paleolithic. mt-haplogroups U and K.

40 ka  Migration to Australia and Europe (Cro-Magnon).

25 ka  The independent Neanderthal lineage dies out. Y-haplogroup R2; mt-haplogroups J and X.

12 ka  Beginning of the Mesolithic/Holocene. Y-haplogroup R1a; mt-haplogroups V and T. Evolution of light skin in Europeans (SLC24A5). Homo floresiensis dies out, leaving Homo sapiens as the only living species of the genus Homo.

Relevance of Research Findings

    As the late Howard Cosell used to say, “Tell it…like…it…is!”

Let me preface this section by saying that modern human beings, as a species, have a terrible ego problem, one that has been reinforced by culture for more than two thousand years. The ego problem itself is that mankind sees himself as superior to all other species on earth. This biased misreading of reality came about because of religion and other aspects of culture.

Although religions differ in their precepts, all peoples of the world prey on other animal species, and they often prey on their own. The Christian religion, with its doctrine of man being made “in the image of God” and “having dominion over the animals,” has influenced followers and non-followers alike to accept the idea that mankind is ordained by God to kill, or at least dominate and control, all other species on earth. These self-appointed privileges have inculcated in mankind a very inflated (control-freak) ego.

All species, of course, have their own particular survival strengths and weaknesses; mankind is no exception. We have a larger brain (and we even sometimes use it), but we are inferior to the eagle in eyesight; we can’t fly like a bird, or outrun lions, tigers, or bears, oh no! Nor can humans outswim a shark in the ocean or crocodiles in a jungle river, or win a wrestling match with a gorilla. Now that we have that straight, let’s discuss the relevance of evolutionary research.

So, what is the relevance of all the research findings on evolution since Charles Darwin?

First, all the research forms a convergence of evidence pointing to the conclusion that mankind did in fact descend from ancestors shared with the other great apes (gorillas, chimpanzees and bonobos, and orangutans). Hominids are the earliest members of the human family.

Of these apes, our DNA is closest to that of the chimpanzee, with which about 94 percent of our genes are identical. Approximately four million years ago, our line branched off and became the hominid Australopithecus.

This convergence of the evidence is based on 150+ years of research, data analysis, interpretation, logic, and reason. Add to this the fact that the data on evolution are supported and expanded across 20+ different scientific disciplines. Coupled with this is all the knowledge that has been obtained on evolution going back to the Archean Eon (3.2–2.5 billion years ago), when prokaryotes first evolved in the oceans. Evolution is no longer regarded as a theory; it is fact.

Second, science in general has served as a catalyst for changing society, and it has succeeded in doing just that. Science in general, as well as paleontology in particular, is fact-based rather than based on superstitious nonsense or supernaturalism. People have benefitted by living in a world where society values truth (albeit still subject to an ongoing process of revision as new evidence is revealed). After all, there are no absolute truths; they don’t exist. What does exist is the fact that “Truth” is what we agree it is. And, informed truth through data and facts is what science is all about.

Third, knowing where one comes from (how we got here) helps one better understand modern society and the relationships between and within different cultures. I posed an important question in Part I of this series; I hope you will go back and review it once again. I also refer you to Appendix A at the end of Part I to see the entire development of living organisms, starting approximately 3.2 billion years ago. Talk about relevance: what could be more relevant than that?

How to discover your own journey

This is simple. Go to www.genographic.com to order your kit online, which includes a quick-start guide and consent form. Dr. Spencer Wells, the leading scientist of the Geno 2.0 project with National Geographic, has said, “The greatest history book ever written is the one hidden in our DNA.” The project collects DNA samples from your cheeks with the enclosed swabs (don’t worry, it’s painless). You then read and complete the consent form and mail it, along with your cheek-swab samples, in the envelope provided. Following this you can track your results and receive project updates by registering online at www.genographic.com. In about six weeks you will receive your detailed results.

You will receive the migration paths your ancient ancestors followed thousands of years ago, and learn the details of your ancestral makeup—your branches on the human family tree. Geno 2 will run a comprehensive analysis to identify thousands of genetic markers on your mitochondrial DNA, which is passed down each generation from mother to child, to reveal your direct maternal deep ancestry. In the case of men, they will also examine markers on the Y chromosome, which is passed down from father to son, to reveal one’s direct paternal deep ancestry.

In addition, for all participants, a collection of more than 130,000 other markers from across your entire genome will reveal the regional affiliations of your ancestry, offering insights into your ancestors who are not on a direct maternal or paternal line.

Included in these markers is a subset that scientists have recently determined came from our hominid cousins. Neanderthals and the newly discovered Denisovans split from our lineage around 500,000 years ago. As modern humans were first migrating out of Africa more than 60,000 years ago, Neanderthals and Denisovans were still alive and well in Eurasia. It seems that our ancestors met, leaving a small genetic trace of these ancient relatives in our DNA.

My Journey out of Africa.

Specific Details   

Back in January 2013 I received the Geno 2.0 kit in the mail. My wife gave me this gift at Christmas by ordering it on the Internet from National Geographic. I filled out the consent form and mailed it back with the two cheek swabs I had used to capture my DNA. The results of my Genographic Project test reveal information about my distant ancestors, including how and when they moved out of Africa and the various populations they interacted with over thousands of years of migration.

How did they do this?

They tracked markers—random, naturally occurring, changes in my DNA. The mutations act as a beacon and can be mapped over thousands of years (on the Y-chromosome for paternal lines and mitochondrial DNA for maternal lines). When geneticists identify such a marker, they try to figure out when it first occurred, and in which geographic region of the world.
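To make the idea concrete, here is a minimal sketch (my own illustration with made-up marker names, not the Genographic Project’s actual pipeline) of how a set of branch-defining mutations can place a sample on a lineage: each haplogroup is defined by all the markers accumulated along its branch, and a sample is assigned to the deepest branch whose markers it carries.

```python
# Haplogroups defined by accumulated branch markers.
# Marker names ("m1"..."m5") are hypothetical, for illustration only.
HAPLOGROUP_MARKERS = {
    "L3": {"m1"},
    "N":  {"m1", "m2"},
    "R":  {"m1", "m2", "m3"},
    "U":  {"m1", "m2", "m3", "m4"},
    "U5": {"m1", "m2", "m3", "m4", "m5"},
}

def assign_haplogroup(sample_markers):
    """Return the deepest haplogroup whose defining markers are all present."""
    best = None
    for group, markers in HAPLOGROUP_MARKERS.items():
        # A sample belongs to a branch only if it carries every marker
        # that defines the branch; prefer the branch with the most markers.
        if markers <= sample_markers:
            if best is None or len(markers) > len(HAPLOGROUP_MARKERS[best]):
                best = group
    return best

print(assign_haplogroup({"m1", "m2", "m3", "m4", "m5"}))  # U5
print(assign_haplogroup({"m1", "m2"}))                    # N
```

A sample carrying only the first two markers stops at N; one carrying all five is placed on the deeper U5 branch.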

In the report below, you will see the groups with which I share genetic markers on my maternal and paternal sides. Each such group is called a “haplogroup,” and is expressed in numbers and letters. Be advised as you read this material that much of it is likely to apply to you as well.

Maternal Line—Overview

U5a2a1d (my maternal haplogroup)

The maternal side of my story begins in East Africa around 180,000 years ago. It was then that the direct maternal ancestor common to all women alive today was born. Of all the women alive at the time, hers is the only line to survive into current generations.

From East Africa, my ancestors spread across Africa and eventually (between 60 and 70 thousand years ago) made their way to West Asia. It is quite possible that your family story begins this way as well.

My ancestors were content to stay in West Asia for about 44,000 years. Then, following the food, climate, and opportunities, some of my ancestors began to spread toward Europe, crossing through the Caucasus region. Other members of this branch returned back toward Africa and the Levant region of the Arabian Peninsula. Your family story may continue with either group of travelers.

Paternal Line—Overview

R-CTS4299 (my paternal haplogroup)

The paternal side of my story begins around 140,000 years ago, again in Africa. Although there were other human males alive at that time, only one male’s lineage is present in current generations. Most men, myself included, can trace their ancestry to one of this man’s descendants.

In a similar pattern to my maternal side, my ancestors traveled to West Asia where they lived by hunting wildlife and gathering wild fruits and berries. Eventually the paternal groups began spreading out west toward Europe and east to Central Asia.  Some returned to Africa and south into the Levant region.


The next section of this blog will describe the maternal line in more detail. The paternal line follows a similar pattern but involves many more branches, defined by additional markers, so I will save the discussion of the paternal line for a later time.

Maternal Line—Specifics

My haplogroup branch, U5a, began its journey from the original lineage known as L. This lineage started 180,000 years ago with “Mitochondrial Eve,” the name given to the woman from whom all people living today descend. Eve produced two lineal branches with different sets of genetic mutations, known as L0 and L1'2'3'4'5'6. Eventually, the L1'2'3'4'5'6 group mutated into a new group called L3, which was the first group to leave Africa. The L3 group that left Africa gave rise to two new haplogroups that populated the rest of the world.

One of those new haplogroups, haplogroup N, is present in my ancestry. About 60,000 years ago, this group moved north across the Sinai Peninsula into present-day Egypt, and on to the eastern Mediterranean regions and western Asia. Descendants of haplogroup N also comprise the most frequent lineage groups found in Europe.

Interestingly, excavations in Israel’s Kebara Cave uncovered Neanderthal skeletons that were approximately 60,000 years old. My ancestors probably interacted with these Neanderthal hominids, which would explain the presence of Neanderthal DNA in my own.

Before heading into the eastern Mediterranean regions and western Asia, some members of haplogroup N mutated into a new haplogroup, R. About 50,000 years ago, this new haplogroup explored new territories. Along with their contemporaries of group N, some returned to northern Africa while others went east into Turkey, Georgia, and southern Russia. Still others headed further east into Central Asia. Your story may continue with either the R group or the original N group; both migrated throughout the world.

Somewhere around 47,000 years ago, a woman gave rise to haplogroup U, a diverse group. It has been found in descendants across Europe, northern Africa, India, Arabia, the northern Caucasus Mountains, and the Near East.

One branch of haplogroup U headed north into Scandinavia. This branch, known as U5, is estimated to be around 30,000 years old.

Around 6,000 to 12,000 years ago, some members of the U5 group left Scandinavia for Western Europe, while others moved further north and east toward South Asia. The branch known as U5a has its highest frequencies in Europe, notably in Slovenia, Bulgaria, and Luxembourg. It is this group that my ancestors probably came from. However, the group also appears in Lebanese, Indian, and Ashkenazi Jewish populations.

Because of the complexities and variations of this haplogroup, research is continuing.
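The maternal descent described above forms a simple tree. As an illustrative sketch, using only the branch names given in this post, the path from Mitochondrial Eve’s lineage down to my branch can be traced with a child-to-parent map:

```python
# Child -> parent map for the maternal branches named in the text.
# This is a simplified illustration, not the full mtDNA tree.
PARENT = {
    "L3":  "L1'2'3'4'5'6",
    "N":   "L3",
    "R":   "N",
    "U":   "R",
    "U5":  "U",
    "U5a": "U5",
}

def lineage(haplogroup):
    """Walk from a haplogroup back up to the root lineage."""
    path = [haplogroup]
    while path[-1] in PARENT:
        path.append(PARENT[path[-1]])
    return path

print(" -> ".join(reversed(lineage("U5a"))))
# L1'2'3'4'5'6 -> L3 -> N -> R -> U -> U5 -> U5a
```

Each arrow corresponds to one of the mutation events described in the paragraphs above.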

Summary of my results

After a detailed analysis of the markers in my paternal DNA and mitochondrial DNA, and a comparison to the geographical presence of other human populations with similar markers, the Geno 2.0 project team concluded that I am 42% Northern European, 38% Mediterranean, and 19% Southwest Asian.

Their analysis explains that the 42% Northern European is a reflection of the fact that:

“This component of your ancestry is found at highest frequency in northern European populations—people from the UK, Denmark, Finland, Russia and Germany in our reference populations. While not limited to these groups, it is found at lower frequencies throughout the rest of Europe. This component is likely the signal of the earliest hunter-gatherer inhabitants of Europe, who were the last to make the transition to agriculture as it moved in from the Middle East during the Neolithic period around 8,000 years ago.”

The 38% Mediterranean is because:

“This component of your ancestry is found at highest frequencies in southern Europe and the Levant—people from Sardinia, Italy, Greece, Lebanon, Egypt and Tunisia in our reference populations. While not limited to these groups, it is found at lower frequencies throughout the rest of Europe, the Middle East, Central and South Asia. This component is likely the signal of the Neolithic population expansion from the Middle East, beginning around 8,000 years ago, likely from the western part of the Fertile Crescent. “

And the 19% Southwest Asian is due to:

“This component of your ancestry is found at highest frequencies in India and neighboring populations, including Tajikistan and Iran in our reference dataset. It is also found at lower frequencies in Europe and North Africa. As with the Mediterranean component, it was likely spread during the Neolithic expansion, perhaps from the eastern part of the Fertile Crescent. Individuals with heavy European influence in their ancestry will show traces of this because all Europeans have mixed with people from Southwest Asia over tens of thousands of years.”

What the Results Mean

The information below is taken directly from the report provided to me by the National Geographic Geno-2 project after their analysis of my DNA.

“Modern day indigenous populations around the world carry particular blends of these regions. We compared your DNA results to the reference populations we currently have in our database and estimated which of these were most similar to you in terms of the genetic markers you carry. This doesn’t necessarily mean that you belong to these groups or are directly from these regions, but that these groups were a similar genetic match and can be used as a guide to help determine why you have a certain result. Remember, this is a mixture of both recent (past six generations) and ancient patterns established over thousands of years, so you may see surprising regional percentages. Read each of the population descriptions below to better interpret your particular result.


My first reference population: German

This reference population is based on samples collected from people native to Germany. The dominant 46% Northern European component likely reflects the earliest settlers in Europe, hunter-gatherers who arrived there more than 35,000 years ago. The 36% Mediterranean and 17% Southwest Asian percentages probably arrived later, with the spread of agriculture from the Fertile Crescent in the Middle East over the past 10,000 years. As these early farmers moved into Europe, they spread their genetic patterns as well. Today, northern and central European populations retain links to both the earliest Europeans and these later migrants from the Middle East.”

Component            Reference Population   My Percentages
Northern European    46                     42
Mediterranean        36                     38
Southwest Asian      17                     19

My second reference population: Greek

“This reference population is based on samples collected from the native population of Greece. The 54% Mediterranean and 17% Southwest Asian percentages reflect the strong influence of agriculturalists from the Fertile Crescent in the Middle East, who arrived here more than 8,000 years ago.

The 28% Northern European component likely comes from the pre-agricultural population of Europe—the earliest settlers, who arrived more than 35,000 years ago during the Upper Paleolithic period.

Today, this component predominates in northern European populations, while the Mediterranean component is more common in southern Europe.”

Component            Reference Population   My Percentages
Mediterranean        54                     38
Northern European    28                     42
Southwest Asian      17                     19
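The two tables above amount to comparing my percentage profile against each reference population’s. As a rough illustration (my own sketch of a distance comparison, not the project’s actual matching method), a simple Euclidean distance over the three components shows why German is listed as my first reference population:

```python
import math

# Ancestry components, in the order:
# (Northern European, Mediterranean, Southwest Asian)
MY_PROFILE = (42, 38, 19)
REFERENCES = {
    "German": (46, 36, 17),
    "Greek":  (28, 54, 17),
}

def distance(a, b):
    """Euclidean distance between two percentage profiles."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

for name, profile in REFERENCES.items():
    print(name, round(distance(MY_PROFILE, profile), 1))
# German ~ 4.9, Greek ~ 21.4: the smaller distance marks the closer match.
```

The German profile sits much closer to mine than the Greek one, consistent with its listing as the first reference population.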

More Information:

If you want more information, the National Geographic Society provides additional references on the Genographic Project website.

My hominid ancestry

“When our ancestors first migrated out of Africa around 60,000 years ago, they were not alone. At that time, at least two other species of hominid cousins walked the Eurasian landmass: Neanderthals and Denisovans. Most non-Africans are about 2% Neanderthal.

“The Denisovan component of your Geno 2.0 results is more experimental, as we are still working to determine the best way to assess the percentage of Denisovan ancestry you carry. The evolution of this data is another way you are actively involved in helping advance knowledge of anthropological genetics!”

My DNA revealed that I am 1.3% Neanderthal.


Postscript

Evolution is the real story of our origins. I hope you decide to become involved with National Geographic’s Geno 2.0 project. As a child I was told that we mostly came from England and Wales. My father’s father was a jeweler who helped create the Crown Jewels of England. In addition, I was told that the family had Jewish roots many generations earlier, and about a grandmother who spoke French. As a many-generation American family, we also have some history of being related to Cherokee Indians. My mother’s family had roots in Oklahoma and settlements from Gainesville, Ohio. This is my oral history, passed down over the last six generations. Is it true? I don’t know; some of it might have been embellished. Yet in 1957 a close relative who worked for the Smithsonian Museum in Washington, D.C. (he was a scientist and Curator of Ferns) researched our maternal family tree, and much of what I was told as a child turned out to be true: no DNA analysis, just good documented tracking. If you combine family oral history with the Geno 2.0 project, you will probably come away with an interesting picture of your ancestral past, both distant and recent.

Good luck in what you find out about your own journey out of Africa.


Our Human Origins—Part II

Evolution is the Real Story


The Evidence for Evolution Goes Into High Gear

Raymond Dart was one of the most significant figures in the history of human origins and evolution. In 1925, he reported in the journal Nature one of the most important findings in all of Paleoanthropology. He discovered the fossil skull of a child, half-ape, half-human. It was the first evidence of an early fossil link between apes and man. This became known as the now famous Taung child, named after the location where it was found. The location was in southern Africa.[1]

The Taung baby was a three-year-old Australopithecus africanus. The fossil had been found in a limestone quarry near Taung, South Africa, and sent to Raymond Dart at the University of the Witwatersrand in 1924. He immediately recognized its great importance.

According to Richard E. Leakey in Origins, “Dart discovered  that the skull was relatively large compared with other known non-human primates; the teeth were more hominid-like than ape-like, as was the shape of the face. Judging by the estimated angle at which the head joined with the neck, the creature stood and moved in an upright posture, and he declared so in the scientific journal, Nature, at the beginning of 1925.”[2]

He named his Taung child in scientific nomenclature as Australopithecus africanus, the southern ape of Africa.

Dart described what he saw in great detail following the canons of a good researcher:

“Dart wasted no time in preparing the report for submission to Nature. In essence, it pointed out that while the skull, teeth, and jaw of the child had been humanoid, rather than anthropoid or apelike, this was undoubtedly a small-brained hominid, or member of the human family – the first of its kind to be described. He pointed out that the forward position of the foramen magnum, where the spinal cord attaches to the skull, clearly indicated that this hominid had walked upright, with its hands free for the manipulation of tools and weapons, in an open environment far to the south of the equatorial forests inhabited by chimpanzees and gorillas.”[3]

It’s fair to say that Dart’s findings were not immediately accepted by the scientific community. There were two major objections to his Taung child discovery: one related to the fossil itself, and the other to its location. The first was that juvenile apes tend to look more human-like than adult apes, which led many to think he had not found the missing link between man and ape. The second was a kind of scientific prejudice: the scientific community wanted to believe that the cradle of man’s origins lay in Europe or Asia, not Africa, and only further evidence would resolve it. Many years later the scientific community would accept and embrace the fossil findings of Raymond Dart.

The Leakey Family

Some of the most famous scientists in the field belonged to a single family of prolific researchers, the Leakeys: Louis Leakey, Mary Leakey, Richard Leakey, and Meave Leakey.

Louis Leakey

Louis Leakey had a tremendous impact on the study of human origins. Besides writing 20 books and 150 articles, he found multiple fossils and stone tools. Through his efforts he convinced other scientists that Africa was the key location in which to search for human origins; in essence, he thought Africa was the cradle of humanity.[4]

Louis Seymour Bazett Leakey was born on August 7, 1903 at Kabete Mission (near Nairobi, Kenya), the son of English missionaries to the Kikuyu tribe. He graduated in 1926 with degrees in both archaeology and anthropology. After this, Leakey began leading expeditions to Olduvai, a river gorge in Tanzania, where he found important fossils and Stone Age tools. In 1948 he reported finding a 20-million-year-old skull, which he named Proconsul africanus.[5]

The first hominid fossil attributed to Leakey was a robust skull with huge teeth, dated to 1.75 million years ago. It was found by Leakey’s collaborator and second wife, Mary Douglas Leakey, in deposits that also contained stone tools. He called the find Zinjanthropus boisei (it is now considered a form of Australopithecus). In 1964 Louis announced another important discovery, that of Homo habilis, which Leakey believed was the first member of the actual human genus as well as the first toolmaker.[6]

Leakey brought world attention to his findings, but he was also influential in the emerging field of primatology. He was responsible for initiating Jane Goodall’s long-term field study of chimpanzees in the wild, and he helped obtain and coordinate funding for similar projects, such as Dian Fossey’s work with mountain gorillas in Rwanda and Birutė Galdikas-Brindamour’s work with orangutans in Indonesian Borneo. Because of all he discovered and all his contributions to scientific research on human origins, Louis Leakey can certainly be called the “Michael Jordan” of human origins research. En route to a speaking engagement in London in 1972, he suffered a heart attack and died.[7]

Mary Leakey

Mary Leakey was Louis Leakey’s wife and collaborator, and she made very important contributions to the field of paleoanthropology and research on human origins. Mary Douglas Nicol Leakey was born on February 6, 1913 in London, England.[8] She had a difficult childhood, and when her father died in 1926, Mary was sent to a Catholic convent school, from which she was repeatedly expelled. Her life soon changed for the better. After she saw the prehistoric caves of the Dordogne she resolved to earn a degree in prehistory, and she attended lectures on archaeology and geology at the University of London.

Mary met Louis after she was asked to illustrate the book The Desert Fayoum by Dr. Gertrude Caton-Thompson.[9] Dr. Caton-Thompson acted as matchmaker, arranging for Mary to meet Louis Leakey while he was giving a talk at the Royal Anthropological Institute. Louis then asked Mary to illustrate his own book, Adam’s Ancestors.[10] Their relationship grew thereafter.

“Mary and Louis spent from 1935 to 1959 at Olduvai Gorge in the Serengeti Plains of northern Tanzania, where they worked to reconstruct many Stone Age cultures dating from 100,000 to two million years ago. They documented stone tools from primitive stone-chopping instruments to multi-purpose hand axes.”[11] “In 1947, on Rusinga Island, Mary unearthed a Proconsul africanus skull, which was the first skull of a fossil ape ever to be found. It was dated to be 20 million years old…In 1955 Mary and Louis were jointly awarded the Stopes Medal from the Geological Association for their hard work and discoveries.”[12]

In 1959 a 1.75-million-year-old Australopithecus boisei skull was found.[13] Not long afterwards, a less robust Homo habilis skull and bones of a hand were found. Both fossils were believed to belong to stone-tool-making peoples.[14] In 1965 Mary and Louis uncovered a Homo erectus cranium one million years old. After Louis Leakey died in 1972, Mary continued her work at Olduvai and Laetoli. At Laetoli she discovered Homo fossils more than 3.75 million years old, along with fifteen new species and one new genus.[15]

One of Mary Leakey’s most famous finds came between 1978 and 1981: the Laetoli hominid footprint trail, left in volcanic ash 3.6 million years ago.[16] The Laetoli footprints were significant. Laetoli is in Tanzania, in eastern Africa. Mary started excavating the site at Laetoli in 1974, and the work continued for five years until, in 1978, she found three sets of fossilized footprints preserved in the ground.[17]

There were approximately seventy footprints in two parallel trails about thirty meters long. After studying the footprints, Mary concluded that they were made by Australopithecus afarensis individuals walking bipedally. This raised questions about the evolution of bipedalism, since it had previously been believed that the first hominid to walk bipedally was Homo erectus.[18]

Although Mary Leakey was technically an archaeologist, much of her work was that of a physical anthropologist. She was best known for the 1959 excavation of a two-million-year-old fossilized hominid skull. She also helped the world see human evolution as something established by hard physical evidence rather than mere conjecture. Mary Leakey died in 1996 at the age of 83.[19]


Richard Leakey

Richard Erskine Frere Leakey, the second of three sons of archaeologists Louis and Mary Leakey, was born on December 19, 1944. Richard Leakey has the triple distinction of being a paleontologist, archaeologist, and conservationist. He started his career following in the footsteps of his famous parents with discoveries of early hominid fossils in East Africa. A Homo habilis skull (ER 1470) and a Homo erectus skull (ER 3733) were discovered in 1972 and 1975 respectively.[20] These were among the most significant finds of Leakey’s early career.

In 1969, he and his wife Margaret had a daughter, Anna, and they were divorced in the same year. The following year he married Meave Epps, a zoologist specializing in primates. They have two daughters: Louise, born in 1972, and Samira, born in 1974.

In the 1970s Richard’s fossil hunting continued. In 1984 his team found the most impressive fossil of his career: WT 15000, nicknamed Turkana Boy, a nearly complete skeleton of a Homo erectus boy.[21] The following year he made another major find, WT 17000, the first skull of Australopithecus aethiopicus.[22] Richard Leakey and Roger Lewin described the experience of this find, and their interpretation of it, in their book Origins Reconsidered (1992). Turkana Boy was estimated to be 1.6 million years old.[23]

In 1989 Richard Leakey was appointed head of the Kenya Wildlife Service (KWS) by President Daniel arap Moi in response to the international outcry over elephant poaching and its impact on Kenya’s wildlife. Leakey created well-armed anti-poaching units that were authorized to shoot poachers on sight, and as a result poaching was dramatically reduced. That same year Leakey, President arap Moi, and the KWS made international headlines when a stockpile of 12 tons of ivory was burned.[24] In May 1995 Richard joined a group of Kenyan intellectuals in launching a new political party, the Safina Party.[25]

Two years earlier, in 1993, Richard Leakey lost both his legs when his propeller-driven plane crashed. Sabotage was suspected but never proven, and within a few months he was walking again on artificial limbs. He wrote about his experiences with the KWS in his book Wildlife Wars: My Battle to Save Kenya’s Elephants (2001).[26] In recent years Richard, although still interested in the field, has had little to do with paleontology; most of his time has been devoted to combating elephant and rhino poaching and to overhauling Kenya’s troubled park system. His wife, Meave Leakey, continues to produce research findings in paleontology.

Meave Leakey

Meave (Epps) Leakey was born in London, England in 1942.[27] She was educated in convent and boarding schools, and later attended a technical college and university, where she developed a love of science. She obtained a B.S. (in Zoology and Marine Zoology) and a Ph.D. (Zoology) from the University of North Wales.[28] In 1970, Meave and Richard Leakey were married. As mentioned earlier, they have two children: Louise, born in 1972, and Samira, born in 1974. In addition to her fieldwork at Turkana, Meave Leakey’s research has focused on the evolution of East African fossil mammals and mammalian faunas as documented in the Turkana Basin.

Dr. Meave Leakey is the standard-bearer of a family of paleoanthropologists who dominated their field for most of the 20th century. In 1999 she made a discovery at Lake Turkana that redefined our understanding of early human ancestry: her research team found a 3.5-million-year-old skull and partial jaw, which she named Kenyanthropus platyops (flat-faced man of Kenya).[29] They considered it to belong to a new branch of our early human family, a finding that challenged the view that human beings descended from a single line of evolution.[30]

In 1989, Meave Leakey became the coordinator of the National Museums of Kenya’s paleontological field research in the Turkana Basin. Since her appointment Dr. Leakey has focused on finding evidence of the very earliest human ancestors at Turkana, concentrating on sites dated to between 8 and 4 million years ago. In 1994, remains of some of the earliest hominids known were discovered at Kanapoi, southwest of the lake. Not only do these finds represent a new species, Australopithecus anamensis (a likely ancestor of Australopithecus afarensis, the earliest hominid species previously recognized), but their dating at 4 million years old has forced a revision of the accepted timeline for hominid evolution.[31] Dr. Meave Leakey has written more than fifty scientific articles and books.

Other Scientists

Donald C. Johanson

Dr. Donald C. Johanson, who earned his Ph.D. from the University of Chicago in 1974, is something of a celebrity in the field of paleontology. He is currently Professor of Anthropology and Director of the Institute of Human Origins at Arizona State University. He is best known for his discovery of “Lucy,” a 3.2-million-year-old Australopithecus afarensis skeleton he found in Ethiopia in 1974.[32] His books include Lucy: The Beginnings of Humankind and From Lucy to Language. Dr. Johanson hosted the Emmy-nominated Nova television series In Search of Human Origins. His career, spanning more than 30 years, has led him to undertake field explorations in Ethiopia, Tanzania, Egypt, Jordan, Saudi Arabia, Yemen, Eritrea, and most recently Iran.[33]

Johanson’s book Lucy: The Beginnings of Humankind won the 1981 American Book Award in Science. It chronicles his discovery of the 3.2-million-year-old Lucy skeleton and highlights its importance for understanding who we are and where we come from.[34]

The expedition that found Lucy began in 1974, in the badlands of the Afar Triangle, in the Middle Awash area near Hadar, Ethiopia, in northeast Africa. During the Pliocene this area was dominated by lakes and woodlands, and it is along the edges of these now-dried lakes, among the former woodland, that fossils can be located. Lucy was a 40%-complete skeleton, found at AL 288-1 (Afar Locality #288) in a sedimentary layer eventually dated at 3.2 million years ago.[35]

“Johanson, along with colleague Tom Gray, had been mapping another locality at the Afar site. Feeling ‘lucky,’ Johanson took a short detour into another area later mapped as locality 288 and ‘noticed something lying on the ground partway up the slope’ (Johanson, Edey, 1980). This ‘something’ turned out to be the exposed portion of a hominid arm bone.”[36]

Later Johanson and his team sectioned off the site to prepare for collecting the remaining bones. After three weeks of work, they had collected several hundred pieces of bone representing 40% of a single skeleton. The team knew these bones belonged to one individual because there was no duplication of any single bone. On the night of November 30, 1974 there was much celebration and excitement over the discovery, with drinking, dancing, and singing, while the Beatles’ song “Lucy in the Sky with Diamonds” played over and over. Johanson suggested that maybe they ought to call their find Lucy.[37]

Dr. Johanson has asserted that Lucy was bipedal based on the shape of her pelvis and the angle the femur takes from the hip socket to the knee joint. “From her waist down she was hominid, and from her waist up she was still ape, as her skull was still the size of a chimpanzee. The discovery of Lucy proves that bipedalism predates big brain size.”[38]

What do we know about Lucy?

  • Other Australopithecus afarensis bones addressing the issue of sexual dimorphism suggest Lucy was female.
  • Lucy’s pelvic and femur structures, along with her knee joint, confirm that she was decidedly hominid.
  • Lucy’s arms are shorter than those of an ape, while maintaining a longer ratio of arm length to overall height than that of a modern human.
  • Her skeleton, compared with those of chimps and modern humans, falls somewhere in between: she was on the way to becoming human.
  • Lucy’s dentition is a cross between ape and human in that the overall shape is apelike while the canine tooth size resembles that of modern humans.
  • In a chimp’s mandible there is a space between its incisors and large canine, which does not occur in Australopithecus afarensis females.
  • Lucy’s jaw was not as pronounced as that of the ape.
  • In addition to having characteristics halfway between ape and hominid, Lucy proves that bipedalism was prevalent long before large brain size.[39]

“It is generally accepted that hominids diverged from the apes between 6 and 5 million years ago. Australopithecus afarensis led to 2 further divergences, the robust Australopithecines that died out, and the gracile line leading eventually to us.”[40]

Recent Developments in Evolution Research

In 2005 some segments of American society fought to suppress and dilute the teaching of even the most basic facts of evolution. With this in mind, Science magazine decided to put Darwin in the spotlight by selecting several discoveries, each of which reveals the laws of evolution in action.

On December 23, 2005 Elizabeth Culotta and Elizabeth Pennisi published an article in Science magazine titled “Breakthrough of the Year: Evolution in Action.”[41] In it the two authors describe the most significant studies and findings on evolution during 2005. “Equipped by genome data and field observations of organisms from microbes to mammals, biologists made huge strides toward understanding the mechanisms by which living creatures evolve.”[42]

Two areas of interest in the article were the chimpanzee genome and speciation.

The Chimpanzee Genome

In September 2005 a dramatic result occurred when an international team published the genome of our closest relative, the chimpanzee. With the human genome defined, researchers could begin to line up chimp and human DNA and examine, one by one, the 40 million evolutionary events that separate them from us. The genome data confirmed our close kinship with chimps.[43]

“We differ by only about 1% in the nucleotide bases that can be aligned between the two species, and the average protein differs by less than two amino acids. But a surprisingly large chunk of noncoding material is either inserted or deleted in the chimp as compared to the human, bringing the total difference in DNA between our two species to about 4%.”[44]
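The arithmetic behind these percentages can be made concrete with a toy calculation. The sketch below uses invented ten-base sequences and a helper name of my own (`base_divergence`); real chimp/human comparisons run over roughly three billion aligned bases:

```python
# Toy illustration of per-base divergence between two aligned DNA sequences.
# The sequences are invented for demonstration, not real genome data.

def base_divergence(seq_a: str, seq_b: str) -> float:
    """Fraction of aligned positions where the nucleotides differ.

    '-' marks an insertion/deletion (indel); indel positions are skipped
    here, mirroring the ~1% figure quoted for alignable bases. Counting
    inserted/deleted material as differences is what pushes the total
    chimp/human difference toward ~4%.
    """
    aligned = [(a, b) for a, b in zip(seq_a, seq_b) if a != '-' and b != '-']
    mismatches = sum(1 for a, b in aligned if a != b)
    return mismatches / len(aligned)

human = "ACGTACGTAC"
chimp = "ACGTACGTAT"  # differs at one of ten alignable bases

print(base_divergence(human, chimp))  # 0.1, i.e. 10% for this toy pair
```

At genome scale the same ratio, mismatched bases over alignable bases, is what yields the roughly 1% figure the authors cite.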


Speciation

The year 2005 was a standout for researchers studying the emergence of new species, or speciation. A new species can form when populations of an existing species begin to adapt in different ways and eventually stop interbreeding.

This often occurs when populations are separated by geographical barriers such as oceans or mountain ranges, but sometimes a single, continuous population simply splits in two. The prediction is that the splitting begins when some individuals in a population stop mating with others. During 2005 field biologists recorded compelling examples of that process, which often results in rapid evolution of the organisms’ shape and behavior.[45]

The authors use the example of European blackcaps, warblers that share breeding grounds in southern Germany and Austria but migrate to northerly wintering grounds rather than heading south. Isotopic data revealed that the northerly migrants reach the common breeding grounds earlier and mate with one another before the southerly migrants arrive. The upshot is that these differences in timing may drive the two populations to become two species.[46]

Other Findings

Some of the most exciting developments in paleontology took place between 2003 and 2005.

In June 2003 three new skulls were discovered at Herto, Ethiopia. At 160,000 years old, these are the oldest known modern human fossils.[47] The discoverers assigned them to a new subspecies, Homo sapiens idaltu, claiming they are anatomically and chronologically intermediate between older archaic humans and more recent, fully modern humans. Their age and anatomy are cited as strong evidence for the emergence of modern humans from Africa, and against the multiregional theory, which argues that modern humans evolved in many places around the world.[48]

In March 2004, fragmentary fossils discovered in Ethiopia and dated to between 5.2 and 5.8 million years ago were reassigned. Originally placed in a new subspecies, Ardipithecus ramidus kadabba, they were elevated, after further study, to a new species, Ardipithecus kadabba, because of the differences their finders saw between them and other fossils.[49] Also in March 2004 it was reported that four new mtDNA sequences had been retrieved from Neanderthal fossils. “This brings the number of known Neanderthal mtDNA sequences to eight, all of which are closely related, and considerably different from all modern human mtDNA sequences.”[50]

Probably the most significant finding of 2004 was the discovery of a gene by scientists at the University of Pennsylvania and the Children’s Hospital of Philadelphia. It is the first scientific report correlating human genetics directly with the fossil record of our hominid ancestors. The researchers theorized that a mutation in a jaw-muscle gene made room for a larger brain.[51]

Basically, a tiny genetic change in the muscles of pre-humans millions of years ago may have played a major role in endowing modern Homo sapiens with the larger brain and the capacity for thought, language, and tool-making that distinguish us from apes.

The team of biologists and surgeons who advanced this novel theory suggest that a mutation in a single gene some 2.4 million years ago was largely responsible for a crucial change in the shape of our ancestors’ jaws, allowing for skulls with room for brains far larger than those of earlier members of the hominid line.[52]

The mutated gene belongs to a class of genes that code for myosin, a protein responsible for muscle contraction that determines the strength and size of the chewing and biting muscles of the jaw. In modern humans the mutated myosin gene, known as MYH16, differs from the far older un-mutated gene found in many nonhuman primates, including macaques and chimpanzees.[53]

The researchers contend that the mutated gene in effect disabled the large and powerful jaw muscles found in fossils of earlier large-skulled hominids, thereby launching a lineage of pre-humans with smaller jaws and larger skulls with room for bigger brains. This occurred 2.4 million years ago, just about the time our own lineage split off from the more primitive hominids known as australopithecines.[54]

In 1967 two skulls were found near the Omo River in Ethiopia by Richard Leakey and thought to be 130,000 years old. They have since been re-dated at 195,000 years, the oldest date known for a modern human skull. The Omo I skull is fully modern, while Omo II has some archaic features.[55]

In March 2005 a newly discovered partial skeleton from Mille in Ethiopia was claimed to be the world’s oldest bipedal hominid. “The fossil is about 4 million years old and has not yet been classified or published in the scientific literature, though it is said to fall between Ardipithecus ramidus and Australopithecus afarensis.”[56]

In 2006, Tim White, now a professor at the University of California, Berkeley, announced the discovery of bones from at least 8 Australopithecus anamensis individuals dating to 4.1 million years ago in what had been a woodland environment in the Awash Valley of Ethiopia. That is nearly a million years before Lucy, previously thought to be the earliest hominid skeleton. According to White, his team’s discovery is the “closest we have ever been able to come” to finding the missing-link common ancestor between humans and chimps.[57]

In 2009, White and his team announced the discovery of a 4.4-million-year-old transitional ape species named Ardipithecus ramidus, which also lived in the woodland environments of the Awash Valley. White believes this very early species was the direct ancestor of Australopithecus afarensis.[58]

In 2010, Lee Berger of the University of the Witwatersrand in Johannesburg, South Africa announced his discovery of two partial skeletons of what may be a new australopithecine species that lived 1.977 million years ago in South Africa. He named it Australopithecus sediba (“sediba” means “fountain” or “wellspring” in the Sesotho language of South Africa). Berger and his colleagues suggest that this new species may be descended from Australopithecus africanus and could be one of the last links in the evolutionary line between the australopithecines and our genus, Homo.[59]
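Taken together, the discoveries surveyed above form a rough timeline. As a summary sketch only (ages in millions of years, quoted from the figures in this post; the labels and arrangement are my own, not a current scientific consensus):

```python
# Ages in millions of years, as reported in the text above.
finds = {
    "Ardipithecus ramidus (White, 2009 announcement)": 4.4,
    "Australopithecus anamensis (White, 2006 announcement)": 4.1,
    "Laetoli footprints (Mary Leakey, 1978)": 3.6,
    "Lucy, Australopithecus afarensis (Johanson, 1974)": 3.2,
    "Australopithecus sediba (Berger, 2010)": 1.977,
    "Turkana Boy, Homo erectus (Richard Leakey, 1984)": 1.6,
    "Omo I skull, Homo sapiens (re-dated)": 0.195,
    "Herto skulls, Homo sapiens idaltu (2003)": 0.16,
}

# Print the finds oldest first.
for name, age in sorted(finds.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{age:6.3f} Ma  {name}")
```

Laid out this way, the century of fieldwork reads as one continuous record running from the earliest woodland apes down to anatomically modern humans.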

In Part III of this series next month I will discuss the current relevance of our human origins and how each of us can trace his or her own journey since humans began leaving Africa 160,000-200,000 years ago.


[1] C.K. Brain, “Raymond Dart and Our African Origins,” [online]; accessed 18 Feb. 2005; available from http://www.press.uchicago.edu/misc/Chicago/284158-brain.html.

[2] Origins, 93, 95.

[3] C.K. Brain

[4] “The Leakey Foundation – Louis S. B. Leakey,” [online]; accessed 31 March 2005; available from http://www.leakeyfoundation.org/foundation/fl-2.jsp.

[5] Ibid.

[6] Ibid.

[7] Ibid.

[8] “Mary Leakey,” [online]; accessed 1 Apr. 2005; available from  http://www.musu.edu/emuseum/information/biography/klmno/leakey-mary.html.

[9] Ibid.

[10] Ibid.

[11] Ibid.

[12] Ibid.

[13] Ibid.

[14] Ibid.

[15] Ibid.

[16] Ibid.

[17] “Laetoli,” [online]; accessed 1 Apr. 2005; available from http://www.mnsu.edu/emuseum/archaeology/sites/africa/laetoli.html.

[18] Ibid.

[19] Mary Leakey

[20] “Richard Leakey,” Wikipedia, the free encyclopedia [online]; accessed 25 Jan. 2006; available from, http://en.wikipedia.org/wiki/Richard-Leakey.

[21] “Biographies: Richard Leakey,” [online]; accessed 25 Jan. 2006; available from http://www.talkorigins.org/faqs/hums/rLeakey.html.

[22] Ibid.

[23] Richard Leakey

[24] Ibid.

[25] Ibid.

[26] Ibid.

[27] “The Leakey Foundation – Meave Leakey,” [online]; accessed 25 Jan. 2006; available from  http://www.leakeyfoundation.org/foundation/fl-6.jsp.

[28] Ibid.

[29] Ibid.

[30] Ibid.

[31] Ibid.

[32] “Donald Johanson Web Page,” [online]; accessed 25 Jan. 2006; available from http://www.asu.edu/clos/iho/johanson.html.

[33] Ibid.

[34] Ibid.

[35] “Australopithecus afarensis (Lucy),” [online]; accessed 27 Jan. 2006; available from http://www.anthro4n6.net/lucy/.

[36] Ibid.

[37] Ibid.

[38] Ibid.

[39] Ibid.

[40] Ibid.

[41] Elizabeth Culotta and Elizabeth Pennisi, “Breakthrough of the Year: Evolution in Action,” [online] (Science, 23 December 2005, Vol. 310, no. 5756, pp. 1878-1879); accessed 29 Dec. 2005; available from http://www.sciencemap.org/cgi/content/full/310/5756/1878.

[42] Ibid.

[43] Ibid.

[44] Ibid.

[45] Ibid.

[46] Ibid.

[47] “Recent Developments in Paleoanthropology,” [online]; accessed 27 Jan. 2006; available from http://www.talkorigins.com/faqs/homs/recent.html.

[48] Ibid.

[49] Ibid.

[50] Ibid.

[51] David Perlman, Chronicle Science Editor, “Jaw gene mutation called key to evolution. It made room for larger brain researchers theorize,” [online]; accessed 17 May 2005; available from http://www.sfgate.com/cgi-bin/article.cgi?file=/chronicle/archive/2004/03/25/MUTATION.T.

[52] Ibid.

[53] Ibid.

[54] Ibid.

[55] Recent Developments in Paleoanthropology

[56] Ibid.

[57] ime/specials/packages/article/0,28804,1972075_1972078_1972462,00.html#ixzz2XxFFPk20

[58] Ibid.

[59] Berger, L. R.; de Ruiter, D. J.; Churchill, S. E.; Schmid, P.; Carlson, K. J.; Dirks, P. H. G. M.; Kibii, J. M. (2010). “Australopithecus sediba: a new species of Homo-like australopith from South Africa”. Science 328 (5975): 195–204. doi:10.1126/science.1184944. PMID 20378811.


Our Human Origins—Part I

Evolution is the Real Story



I am initiating a three-part series on Our Human Origins. Part I describes the background and history of how Charles Darwin came to his scientific conclusion that man descended from the great apes, and describes the life of Charles Darwin in some detail. It also covers the basics of Linnaeus’s animal kingdom classification scheme and how humans fit into it. In Part II I will report on the major discoveries in paleontology since Darwin and the scientists who made them. In Part III I will discuss the current relevance of our human origins and how each of us can trace his or her own journey since humans began leaving Africa 160,000-200,000 years ago.

I will share with my cyberspace audience how I was able to do that, and I will pass on to readers how they can trace their own journey out of Africa. You will learn about the National Geographic Genographic Project (Geno 2.0) and what you can do to become involved in its DNA program. As a by-product you will learn your own human origin, removing all doubt about the origin of man as you experience your own personal journey. To really see the big picture of evolution, I highly recommend cyberspace readers take the time to learn what is in Appendix A on the Geological Timeline. Although there isn’t perfect agreement among scientists as to the exact timeline for any eon, era, period or epoch, it is to date the most solid timeframe for understanding earth formation, continental movements, and the evolution of all species, including human evolution. There is no more exciting place to be than in the sciences that show us who we are and where we come from.


In my opinion Charles Darwin was the greatest scientist of the 19th century, providing what was then an earth-shaking revelation: that man descended from the great apes. The Christian world went apoplectic at this revelation, and even various segments of the secular world wanted proof beyond a reasonable doubt. In Darwin’s day there was no knowledge of DNA, the fine points of biochemistry, or the genetic transmission of human characteristics through one’s genes. For that matter, even the fine points of evolutionary theory itself had not yet been revealed through scientific research, and the work of paleontologists was just getting started, meaning that the digging up of fossils supporting Darwin’s theory would be many decades in the making.

A healthy skepticism is always a good approach for any scientist to take where his work is concerned. But now, after 150+ years and thousands of studies in many diverse scientific fields, evolution is no longer discussed as mere conjecture but as fact. Why? Because there is an overwhelming convergence of scientific evidence that species do evolve, and primarily by the mechanisms Charles Darwin originally proposed. All of us need to remember that “truth” is what we agree it is, nothing more and nothing less. Scientific consensus building sees the concept of truth in the same way. And the consensus is that evolution is very real: the scientific community supports it, based on accumulated evidence and a 150+ year convergence of that evidence.

Our human ego would like us to believe that our particular species is uniquely special, but we are neither unique nor really that different or special compared to other species. I’ll never fly on my own power, move faster than a shark through the water, or outrun a cheetah; and the bald eagle has better eyesight than any human on the planet. Case closed on the superiority of our species. But we do have a larger, more complex brain than all other species, and it’s about time we start using it rather than dwell on supernatural hypotheses that explain nothing and are devoid of any real scientific data or evidence.

Society’s scientific question these days is no longer whether evolution occurred, but rather centers on unraveling the complexity of how it occurred. And indeed, there is still much complexity in this area of scientific research. In this three-part series I want to bring the reader up to date on our human origins. Below is a preliminary glance at our origins, along with a significant social question for you to think about.

All people alive on earth today trace their maternal line back to Mitochondrial Eve. That sort of makes us all brothers and sisters under the sun, all related by a family tree going back to her. As the late Rodney King so famously said in 1992, “Why can’t we all just get along?” The astute observer will say that we don’t fight because of genetics or our human origins; we fight because of culture and cultural differences. This is very true, but Rodney King’s question still reaches out to all of us, even hypothetically. The values of different groups have to change, lest we eventually destroy ourselves through nuclear war. Can our human survival instinct mitigate cultural differences through cooperation and understanding? Perhaps the current explosion of worldwide communication technology, and a burgeoning global economy, will one day pave a path to cooperation and understanding, or perhaps not. What do you think? In the meantime let’s get back to our collective human origins.


The Evidentiary Trail of Evolution

We have fossils…We win!

Lewis Black, on creationism (Comedian, 1948–present)


Charles Darwin was not the first person to suggest a theory of evolution. What made it work for Darwin was that he was an effective synthesizer of existing information; it remained for him to assemble all the data and construct an unassailable theory.[1] What also set Darwin apart was that he collected a vast amount of data of his own to buttress the theory of evolution. He knew this was important because of the essentially skeptical reception new scientific ideas encounter.[2]

Two people contributed a great deal to the concept of evolution prior to Charles Darwin. One was his grandfather Erasmus Darwin (1731-1802), a physician, philosopher, poet, and celebrated personality.[3] In his writings between 1784 and 1802 Erasmus Darwin posed two important questions: first, whether all living creatures are ultimately descended from a single common ancestor; and second, how species would be transformed.

To answer the first question he assembled evidence from several fields [embryology, comparative anatomy, systematics, geography, and fossil data] for a single source of all life, an evolving web of life that included mankind.[4] “This after all was in keeping with the eighteenth-century classification of all animals and plants into families, genera and species by the Swedish botanist Carolus Linnaeus (1707-1778), who classed Homo sapiens as a close relative of Old World monkeys and the apes–although scientists and theologians alike had exerted great efforts to extricate mankind from this unseemly association.”[5]

The second question was more difficult to answer. However, Erasmus Darwin’s treatment of the problem contained the seeds of almost all of the important principles of evolutionary theory.[6] According to Richard L. Leakey and Roger Lewin, “He saw that competition and selection were possible agents of change; that over-population was an important factor in sharpening competition; that plants should not be left out of evolutionary theory; that competition of males for females has important structural implications in their evaluation; and that fertility and susceptibility to disease were areas of selection.”[7]

Another historical figure who influenced evolutionary theory before Charles Darwin was Jean Baptiste Lamarck (1744-1829), who took up Erasmus Darwin’s mention of the inheritance of acquired characteristics and expanded it into a fully fledged theory of evolution.[8] Erasmus Darwin had not definitively stated that the principal agent of evolution is passive adaptation through natural selection; rather, he seemed to say that animals may evolve through active adaptation to their environment, including the inheritance of acquired characteristics.[9]

One of the books Charles Darwin took on his voyage aboard the Beagle was Charles Lyell’s Principles of Geology. Lyell is known as the father of modern geology. He had revised James Hutton’s Theory of the Earth, which established the thesis that came to be known as uniformitarianism. The thesis came out of a debate concerning the origin of fossils, the major issues being the correct age of the earth and the age of the strata from which fossils were retrieved. Fossils don’t reveal their own age directly; the geological strata in which they are found do.[10] Such work before Darwin meant that evidence was mounting toward the inescapable conclusion that Homo sapiens inhabited a planet of great antiquity.[11]

All of these early research findings on geology, the age of the earth, and early notions of evolution came together with Charles Darwin’s data collection. The subsequent publication of his book on evolution directed everyone’s attention to answering the two most critical questions of that era: Who are we, and where do we come from? These questions continue to be asked today.

Who Are We? Where Do We Come From?

These are the most critical questions of this or any other era. Mankind has been on this earth only a short time when one considers that the age of the earth is estimated to be 4.6 billion years. The genus Homo has been on this earth perhaps only 1.5 million to 2 million years. That is a drop in the bucket of geological time. At the outset it is important to know that humans share their genetic material with every other creature on earth, going back to the very beginning when life first began.
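That “drop in the bucket” can be made concrete with a quick back-of-the-envelope calculation, a sketch using only the round figures cited above (4.6 billion years for the earth; roughly two million years for our lineage):

```python
# Back-of-the-envelope comparison, using the round figures cited above.
EARTH_AGE_YEARS = 4.6e9   # estimated age of the earth
HOMO_YEARS = 2.0e6        # upper estimate for our lineage's tenure

fraction = HOMO_YEARS / EARTH_AGE_YEARS
print(f"Share of Earth's history: {fraction:.4%}")
```

That works out to roughly 0.04 percent of the planet’s history, which is indeed a drop in the bucket.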

However, the degree of closeness to other species depends upon how much genetic material we share. Consequently, mankind shares the greatest amount of genetic material with the great apes. The subfamily Homininae is divided into the tribes Gorillini (gorillas) and Hominini (chimpanzees and humans). Homininae is a subfamily of Hominidae, which includes Homo sapiens and some extinct relatives, as well as the gorillas and the chimpanzees.[12]

The hominins comprise all those species, such as Australopithecus, which arose after the split from the other great apes. Humans, gorillas, and chimpanzees are all closely related. “As of 1980, the family Hominidae contained only humans, with the great apes in the family Pongidae. Discoveries led to a revision of classification, with the great apes (now Pongidae) and humans (Homininae) united in Hominidae.”[13] The term “great apes” refers to gorillas, chimpanzees, and orangutans.

But discoveries indicated that gorillas and chimpanzees are more closely related to humans than they are to orangutans, hence their current placement in Homininae. Evolution is not necessarily progress; it is fortuitous and highly random. “Humans arose, rather, as a fortuitous and contingent outcome of thousands of linked events, any one of which could have occurred differently and sent history on an alternate pathway that would not have led to consciousness.”[14]

Stephen Jay Gould cited four examples of this: (1) If our inconspicuous and fragile lineage had not been among the few survivors of the initial radiation of multicellular animal life in the Cambrian explosion 530 million years ago, then no vertebrates would have inhabited the earth at all;[15] (2) if a small and unpromising group of lobe-finned fishes had not evolved fin bones with a strong central axis capable of bearing weight on land, then vertebrates might never have become terrestrial;[16] (3) if a large extraterrestrial body had not struck the earth 65 million years ago, then dinosaurs would still be dominant and mammals insignificant (the situation that had prevailed for 100 million years previously);[17] and (4) if a small lineage of primates had not evolved upright posture on the drying African savannas just two to four million years ago, then our ancestry might have ended in a line of apes that, like the chimpanzees and gorillas today, would have become ecologically marginal and probably doomed to extinction despite their remarkable behavioral complexity.[18] Said again, our survival to where we are today was accidental, random, and based on a non-purposeful journey that was indeed perilous.

In terms of geological time, the amount of time Homo sapiens have been on earth is very short. Most people believe that evolution followed a linear path from single-celled organisms to human complexity. But that viewpoint is incorrect. Evolution is best thought of as a tree with many radiating branches. Just as new species evolve, so do new branches emerge on the tree. Mankind evolved on a very recent branch.

Earlier Branches of Evolution

When the dinosaurs died out 65 million years ago, the way was paved for mammals. Mammals are vertebrates: they have a backbone enclosing a sheath of nerves that leads in turn to a brain housed in a skull. They also have four limbs with pentadactyl ends (five fingers or toes).[19] Mankind’s descent came up this branch of the tree.

The Eocene is the second epoch of the Cenozoic. It started approximately 56 million years ago and lasted approximately 20 million years. During this epoch the first primates evolved.[20] By the end of the epoch most of the modern orders of mammals had evolved. The Eocene is regarded as the warmest epoch of the Tertiary. The increasingly warm conditions at the start of the Eocene caused the extinction of some prominent species of the prior epoch. But, overall, land mammals flourished, and new species diversified and adapted. In particular, mammals with a keen sense of smell thrived in the dense forests and warm conditions.[21]

Primates from this epoch evolved many interesting features, including grasping hands and feet that had nails rather than claws. “One of the most influential developments is the primate reliance on sight rather than smell that evolved around fifty-five million years ago. These primates were abundant on several continents, but were eventually absent from South America and Antarctica. Both of the modern suborders of primates originated in the Eocene, or possibly in the late Paleocene.

One suborder includes the lemurs and lorises (Strepsirrhines). The descendants of these species still thrive in the tropical forests of Africa, Madagascar, and Asia. The other suborder (Haplorhines) includes the higher primates, such as the monkeys, apes and humans, which are often referred to as ‘anthropoid primates.’”[22]

Humans, apes, and monkeys are the living descendants of the first anthropoids, which evolved in the Eocene.[23] “Anthropoids diversified greatly during the late Eocene and the Oligocene. The boundary between the two epochs is marked by a 10-million-year-long fluctuation in climate and environment.”[24] “One of the environmental results was the shrinking of the subtropical and tropical forests. These forests, which had fostered the adaptive radiation of species, started their retreat to the modern tropical zones.”[25]

“As a result of these changes, grasses evolved, which greatly influenced the evolutionary history of land mammals. At the end of the Eocene the primates of the Northern Hemisphere nearly disappeared; many primate species took refuge in Africa and Arabia. It was in this refuge area where the ancestors of Old World monkeys and apes evolved in the Oligocene.”[26]

“Comparisons of DNA show that our closest living relatives are the ape species of Africa, and most studies by geneticists show that chimpanzees and humans are more closely related to each other than either is to gorillas. However, it must be emphasized that humans did not evolve from living chimpanzees. Rather, our species and chimpanzees are both the descendants of a common ancestor that was distinct from the African apes.

This common ancestor was thought to have existed in the Pliocene between 5 and 8 million years ago, based on the estimated rates of genetic change. Both of our species have since undergone 5 to 8 million years of evolution after the split of the two lineages. Using the fossil record, scientists have attempted to reconstruct the evolution from the common ancestor through the series of early human species to today’s modern human species.”[27]

Paleontology Comes of Age

In terms of scientific classification, where are we as a species? Based on the Linnaean system, the genus Homo is classified as follows:[28]

Kingdom:  Animalia

Phylum:     Chordata

Class:         Mammalia

Order:        Primates

Family:      Hominidae

Subfamily: Homininae

Genus:        Homo
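For readers who think in data structures, the ranks listed above can be expressed as a simple ordered mapping, a minimal sketch using exactly the names in the list (nothing else is implied):

```python
# The Linnaean classification of the genus Homo, from the list above.
# Python dicts preserve insertion order, so this reads broadest to narrowest.
classification = {
    "Kingdom":   "Animalia",
    "Phylum":    "Chordata",
    "Class":     "Mammalia",
    "Order":     "Primates",
    "Family":    "Hominidae",
    "Subfamily": "Homininae",
    "Genus":     "Homo",
}

# Print the ranks from broadest to narrowest.
for rank, taxon in classification.items():
    print(f"{rank}: {taxon}")
```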

Homo is the genus that includes modern humans and their close relatives. The genus is estimated to be between 1.5 and 2.5 million years old. All species except Homo sapiens are extinct. Homo neanderthalensis died out 30,000 years ago, while recent evidence suggests that Homo floresiensis lived as recently as 12,000 years ago. The word Homo is Latin for “man,” in the original sense of “human being.”

The first major discovery began with the finding of Neanderthal man in 1856. The idea that humans are similar to certain great apes had been obvious to people for some time; however, the idea of the biological evolution of species in general was not legitimized until Charles Darwin published On the Origin of Species in 1859. Although Darwin’s first book did not address the specific question of human evolution, his next book, The Descent of Man, did.[29]

Since the time of Carolus Linnaeus, the great apes have been considered the closest relatives of human beings, based on morphological similarity. In the 19th Century it was speculated that our closest living relatives were the chimpanzees and gorillas, and, based on the natural range of these creatures, it was surmised that humans share a common ancestor with the African apes and that fossils of these ancestors would ultimately be found in Africa.

Many of Darwin’s ideas were considered controversial. Even some of his supporters, like Alfred Russel Wallace and Charles Lyell, didn’t want to accept the idea that human beings could have evolved their apparently boundless mental capacities and moral sensibilities through natural selection.

Human Evolution is the Story

Evolutionary science is as much about the people who make discoveries as about the famous discoveries themselves. Nowhere is this more true than in the history of discovery since the 19th Century. Readers interested in the details of the timeframe of evolution since the beginning of the earth should review Appendix A. I’d like to conclude Part I of this blog with the life of Charles Darwin.


The Life of Charles Darwin


One of the greatest scientists of the 19th Century was Charles Darwin. In fact, a few years back the Science Channel produced a program in which scholars, researchers, and scientists selected the top 100 greatest scientific discoveries of all time. With the help of Discover Magazine, they then surveyed the public’s ranking of all 100 discoveries and showcased the results for the top 10. Although Albert Einstein held two positions among the top ten, it was the work of the first evolutionary biologist, Charles Darwin, that the public ranked #1 as the greatest scientific discovery of all time.

The influence of Darwin’s theory of evolution forever altered the sciences, religion, and human culture as well. Many scientists today collect and analyze data, but in his era Charles Darwin was a “data collector extraordinaire.” As a naturalist he authored many scientific books, but his work on the theory of evolution and natural selection was best described in On the Origin of Species (1859) and The Descent of Man (1871). Darwin did not rush to judgment on his research findings. He took more than 20 years following the conclusion of his travels on the H.M.S. Beagle to publish his works on evolution.

Like many scientists before him Darwin believed that all life on earth evolved over millions of years from a few common ancestors. Upon his return from the 5-year voyage on the Beagle he conducted thorough research of his notes and specimens. Based on his research findings he concluded that:

(1) Evolution did occur

(2) Evolutionary change was gradual, requiring millions of years

(3) The primary mechanism for evolution was a process called natural selection

(4) The millions of species alive today arose from a single original life form through a branching process called speciation.


Who really was Charles Darwin? Who was the man who caused a firestorm of controversy and turned religion into a train wreck, upside-down and on its side? Charles Darwin was a mild-mannered man who once studied for the ministry and never wanted to bring such controversy upon himself. But even 150+ years after his publication, his work is still attacked and maligned. In the hiatus between Darwin’s time and now, scientific research from paleontology, genetics, geology, zoology, botany, biology, chemistry, ecology, astrophysics, anthropology, and various subfields within these sciences has found correlative, corroborating evidence and, at times, overwhelming direct scientific proof for Darwin’s theory of evolution and natural selection.

Because of the data obtained, and the huge body of scientific findings in support of it, the late scientist Carl Sagan concluded in his series Cosmos that evolution isn’t just a theory–it is fact. The scientific consensus building has been profound. The pieces of the puzzle of evolution (and there have been thousands upon thousands of them over the last 150+ years) have produced a picture that clearly shows the processes of evolution at work among all species, living and dead.

Charles Robert Darwin was born on February 12, 1809 in Shrewsbury, England.[30] He was born into the family of Robert Waring Darwin as their second son. His mother was the former Susannah Wedgwood. She died when he was only eight years old, and he was raised by his older sister. The young Charles was nonetheless very fortunate economically: his father was a doctor, and his mother was the daughter of Josiah Wedgwood, the wealthy founder of a pottery works. His grandfather Erasmus was a scientist who wrote Zoonomia, or The Laws of Organic Life. Because of his background he was not expected to work for a living but to use his education and talents in the professions.[31]

Early on it was clear that choosing a career would prove difficult. Charles Darwin tried medicine, like his father. He studied at Edinburgh but hated it. He watched surgery, but in those days (the 1820s) anesthesia didn’t exist, and his observation of such surgery horrified and repulsed him. It was clear to Charles and his family that medicine was not going to be his career. In 1828 he went to Cambridge (Christ’s College) to train for the ministry.[32]

At Cambridge Charles became friends with a professor, John Stevens Henslow, through whom he developed a great interest in zoology and geography. It was here that he began to love collecting plants, insects, and geological specimens. Charles Darwin knew that the backbone and cornerstone of science was first and foremost observation coupled with data collection.

It was Professor Henslow who arranged for Charles Darwin to be invited on a surveying expedition aboard the HMS Beagle.[33] The ship’s captain needed a naturalist to serve onboard. The Beagle had been outfitted by the Admiralty for an extended voyage to the South Seas; the intention was to survey coastal South America, and the ship was also equipped for ‘scientific purposes.’ Captain FitzRoy was only twenty-six, and the Admiralty thought he needed a young companion who could relieve the isolation of command and share the captain’s table as messmate. Although Charles Darwin was not yet a ‘finished naturalist,’ Henslow thought he was a good choice to make the voyage.

In South America Darwin found fossils of extinct animals that were similar to modern species. He studied plants and animals everywhere he went, always collecting specimens. On the Galapagos Islands in the Pacific Ocean he found many variations among plants and animals of the same general type as those in South America. As was described earlier, one outcome of macroevolution is speciation.

Darwin was amazed by the diversity of finches on the Galapagos Islands. Each species of finch had a unique beak tailored to its specific diet. He theorized that the dozen or so variations arose from a single ancestor whose descendants spread out and adapted to different conditions, eventually evolving into separate species. The finch species fell into three groups: (1) seed eaters, a ground-feeding group with strong bills adapted to crushing seeds; (2) insect eaters, in which larger-beaked species caught big bugs while smaller-beaked birds focused on little insects; and (3) bud eaters, which used their stubby beaks to grasp buds, blossoms and fruits (the vegetarian finch).[34]

The voyage was originally planned for two years but was extended; in all, the voyage of the HMS Beagle lasted five years, from 1831 to 1836. Besides South America and the southern islands, Australia was added to the itinerary. On the Beagle Darwin served as geologist, botanist, zoologist, and general man of science.[35]

Darwin’s research had a tremendous impact on religious thought, and still does to the present day. Many people who strongly oppose the idea of evolution (although unable to refute it on scientific grounds) do so because it conflicts with their religious convictions. Darwin, a reserved, thorough, and hard-working scholar and scientist, avoided talking about the theological aspects of evolutionary theory.

Post Script

Physiological evolution now begins to have a new partner–cultural evolution, including the appearance of ancient polytheistic and pagan cults and animism- and shamanism-type religions. Some of these developments took place during the last 190,000 years of the previous epoch, the Pleistocene. But the Holocene witnessed the culmination of the new one-god concept on the block–monotheism, first Yahweh, and then Second Isaiah’s pronouncement of a universal God. These extensions of the primitive cults and religions into monotheism didn’t surface until approximately 2000 B.C.E. Atheism, the newest kid on the block of culture, didn’t appear until the Age of Enlightenment. And all cultural evolution, like human physical evolution, continues to expand and evolve.

Appendix A

Geological Timeline

Geological time is divided into eons, which contain eras; eras contain periods, and periods contain epochs. The first eon, the Hadean, lasted from 4.5 to 3.9 billion years ago. The Archean Eon lasted from 3.9 billion to 2.5 billion years ago. The Proterozoic Eon lasted from 2.5 billion to 540 million years ago. Collectively, these three eons represent Precambrian Time.[36]
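The three Precambrian eons can be checked for consistency with a small sketch; the boundary dates are the ones given above (this is illustration, not geology):

```python
# The three Precambrian eons with boundaries in billions of years ago,
# as given in the text. We verify they tile Precambrian time without gaps.
precambrian = [
    ("Hadean",      4.5, 3.9),
    ("Archean",     3.9, 2.5),
    ("Proterozoic", 2.5, 0.54),   # 540 million years ago
]

# Each eon's end must equal the next eon's start.
for (name_a, _, end_a), (name_b, start_b, _) in zip(precambrian, precambrian[1:]):
    assert end_a == start_b, f"gap between {name_a} and {name_b}"

total = sum(start - end for _, start, end in precambrian)
print(f"Precambrian time spans about {total:.2f} billion years")
```

In other words, nearly four of the earth’s 4.6 billion years, about 86 percent of its history, passed before the Cambrian began.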

Definitions of Terms[37]:

EON – Two or more geological eras form an eon, which is the largest division of geologic time, lasting many hundreds of millions of years.

ERA – Two or more geologic periods comprise an era, which is hundreds of millions of years in duration.

PERIOD – The period is the basic unit of geological time in which a single rock system is formed, lasting tens of millions of years.

EPOCH – An epoch is a division of a geologic period; it is the smallest division of geologic time, lasting several million years.

AGE – An age is a unit of geological time distinguished by some feature (like an Ice Age). An age is shorter than an epoch, usually lasting a few million years.

What do scientists know about each of the geological time frames (eons, eras, periods, and epochs) that can help one understand how life began on earth? During the Hadean Eon (4.5-3.9 billion years ago) the Earth formed as a solid planet. There is no evidence of life on earth at that time.[38] During the Archean Eon (3.9-2.5 billion years ago) the Earth’s permanent crust began to form. Vast amounts of metallic minerals were deposited. The oceans and atmosphere resulted from volcanic out-gassing. The earliest life forms evolved in the seas. They are called prokaryotes: single-celled organisms with no nucleus–cyanobacteria (blue-green algae). The earliest bacteria obtained energy through chemosynthesis (ingestion of organic molecules).[39]

During the Proterozoic Eon (2.5 billion years ago–540 million years ago) plate tectonics began to slow to the same rate as today. Large mountain chains began to form as continents collided. Quartz-rich sandstones, shales, and limestones were deposited over the continents. Oxygen levels increased as life on earth developed the ability to obtain energy through photosynthesis.[40]

Soon eukaryotes, single-celled organisms with a nucleus, began to evolve. During this eon more advanced forms of algae and a wide variety of protozoa evolved. Eukaryotes reproduced sexually, which makes genetic diversity possible and led to their ability to adapt and survive environmental changes. Multi-celled, soft-bodied marine organisms (metazoans) evolved.[41] Sexual reproduction led to an explosion in the rate of evolution. While most life occurred in oceans and lakes, some cyanobacteria may already have lived in moist soil by this time.

At 600 mya sponges (Porifera), jellyfish (Cnidaria), flatworms (Platyhelminthes), and other multicellular animals appear in the oceans. Cnidaria and Ctenophora are among the earliest creatures to have neurons, in the form of a simple nerve net–no brain or nervous system.

From 545 million years ago to the present geologic time is divided into three eras: Paleozoic, Mesozoic, and Cenozoic. These are known as the Age of Invertebrates, Age of Reptiles, and Age of Mammals, respectively.[42]




Paleozoic Era (Ancient Life)-Age of Invertebrates

Cambrian Period                                                                            545-505 mya

Ordovician Period                                                                          505-438

Silurian Period                                                                               438-410

Devonian Period                                                                            410-355

Carboniferous (Mississippian/Pennsylvanian) Period          355-290

Permian Period                                                                              290-250



Mesozoic Era (Middle Life)-The Age of Reptiles

Triassic Period                                                                                 250-205

Jurassic Period                                                                                 205-135

Cretaceous Period                                                                            135-65



Cenozoic Era (Recent Life)-The Age of Mammals

Tertiary Period

Paleocene Epoch                                     65-55

Eocene Epoch                                          55-38

Oligocene Epoch                                     38-26

Miocene Epoch                                        26-6

Pliocene Epoch                                         6-1.8

Quaternary Period

Pleistocene Epoch                                 1.8-.01

(Lower Paleolithic)                               1.8-.25

(Middle Paleolithic)                             .25-.06

(Upper Paleolithic)                              .06-.01

Holocene Epoch                                   .01-0



The Paleozoic Era includes six periods. These include the Cambrian, Ordovician, Silurian, Devonian, Carboniferous, and Permian.[43]

Cambrian Period


During the Cambrian Period (545-505 million years ago) sedimentary rocks such as sandstone, shale, limestone, and conglomerate formed in shallow seas over the continents. Collisions of the earth’s plates gave rise to the supercontinent Gondwana, composed of South America, Africa, Antarctica, and Western Australia, as well as peninsular India and parts of Arabia.[44]

The global climate is generally mild. Marine metazoans with mineralized skeletons such as sponges, bryozoans, corals, brachiopods, mollusks, arthropods, and echinoderms flourish. Plant life is limited to marine algae; however, one group of arthropods, the trilobites, is particularly dominant in the seas.[45]

Ordovician Period


During the Ordovician Period (505-438 million years ago) North America, Europe and Africa merge. Shallow seas cover most of North America at the beginning of the period, then recede, leaving a thick layer of limestone. Later in the period the seas recover North America, depositing quartz sandstones and more limestone.[46] As far as life is concerned, invertebrates are still the dominant form of life on earth. Corals, crinoids, and small clams evolve, as well as the first early vertebrates–primitive fish with bony armor plates. Late in the Ordovician Period a mass extinction of marine life occurs, opening niches for benthic (bottom-dwelling) and planktonic (floating, swimming) organisms.[47]

Devonian Period

The Devonian Period (410-355 million years ago) is often referred to as “The Age of Fish.” Various forms of fish dominated the seas, including sharks, lungfish and armored fish. Europe and North America merge, forming the northern part of the ancestral Appalachian mountain range.

Europe and North America straddle the equator, while Africa and South America are positioned over the South Pole.[48] The climate during the Devonian is generally warm and moist. Ammonites evolve from nautiloids and become one of the dominant invertebrate forms. As ozone forms in the atmosphere, the first air-breathing arthropods–spiders and mites–venture onto the land. Amphibians evolve, and plant life, including lowland forests of giant Psilophyta plants, develops and spreads over the planet.[49]

Carboniferous Period


During the Carboniferous Period (355-290 million years ago) two major land masses form: Laurasia (North America, Greenland, northern Europe, and Scandinavia) to the north of the equator, and Gondwana (South America, Africa, peninsular India, Australia, and Antarctica) to the south. Collisions between Laurasia and Gondwana occur and form major mountain ranges. Coal-forming sediments are laid down in vast swamps.[50] The global climate changes from warm and wet to cooler and drier, resulting in a long interval of glaciation. Ammonites are common in open marine waters, and cordaites and ferns are common on land. Insects such as cockroaches flourish. More fish and reptiles evolve. Land environments are dominated by plants–from small shrubby growths to tall trees.[51]

Permian Period  


During the Permian Period (290-250 million years ago) Pangaea is formed: a single supercontinent created as the Earth’s landmasses collide and merge. Pangaea extends across all climatic zones and nearly from one pole to the other, and is surrounded by an immense world ocean. During the Permian, extensive glaciations persist in what are now India, Australia and Antarctica.[52]

Hot, dry conditions prevail on the supercontinent of Pangaea, and deserts become widespread. At the beginning of the Permian invertebrate marine life is rich. Toward the end of the period, mass extinctions occur among large groups of corals, bryozoans, and other invertebrates. The last of the trilobites become extinct. Life in the terrestrial environment changes as insects evolve into their modern forms; dragonflies and beetles appear.[53] Amphibians decline in number, but reptiles undergo an evolutionary radiation into carnivorous and herbivorous land and aquatic forms. In terms of plant life, ferns and conifers persist in the cooler air.[54]



The Mesozoic is known as the Age of Reptiles. It consists of three periods known as the Triassic, Jurassic, and Cretaceous.[55]

Triassic Period


The Triassic Period occurred between 250 and 205 million years ago. Pangaea covers nearly a quarter of the earth’s surface. The Triassic, unlike earlier periods, is marked by few geological events. Toward the end of the period, continental rifting begins to break apart the supercontinent.[56] The general climate of the Triassic is warm, becoming semiarid to arid. Most children and adults are fascinated by what happens during the Triassic: the early dinosaurs evolve. Many are fast, bipedal, and relatively small.[57]

Jurassic Period


The Jurassic Period lasted from 205-135 million years ago. During this period the Atlantic Ocean begins to form as North America separates from Africa and South America. Tectonic plate subduction along western North America causes the Earth’s crust to fold, and mountains form in the western part of the continent.[58]

During this period reptiles adapt to life in the sea, the air, and on land. On land dinosaurs are the dominant reptiles. Mammals are small, shrew-like animals. Archaeopteryx, the first bird, appears. Early amphibians, extinct by the late Triassic, are succeeded by the first frogs, toads, and salamanders. Plant forms are dominated by the cycads and cycadeoids. Conifers and ginkgoes become widespread.[59]

Cretaceous Period


The Cretaceous Period lasted from 135 to 65 million years ago. The continents, while not in their current positions, are shaped much as they are today. South America and Africa separate, and the North Atlantic Ocean widens. A circum-equatorial sea–the Tethys–forms between the Northern and Southern Hemisphere continents. The westward movement of North America forms the ancestral Rocky Mountains and the ancestral Sierra Nevada. Sea levels rise, submerging about 30% of the Earth’s present land surface. The global climate is generally warm, and the poles are free of ice.[60]

Dinosaurs and other large reptiles peak as the dominant vertebrate life forms on earth. Dinosaurs extend their range throughout every continent. Horned dinosaurs are common, as are armored ankylosaurs and spiky nodosaurs. In the shallow seas, invertebrates live in great diversity; ammonites are a dominant group, and gastropods, corals, and sea urchins flourish.[61] The early flowering plants (angiosperms, the modern trees) and many modern types of insects evolve.

Near the end of the Cretaceous Period, mass extinctions claim five major reptilian groups: the dinosaurs, pterosaurs, ichthyosaurs, plesiosaurs, and mosasaurs. Extinctions also occur among ammonites, corals, and other invertebrates.[62]


Cenozoic Era (Recent Life)


The Cenozoic Era is also known as The Age of Mammals.  There are two periods: Tertiary and the Quaternary. During the Tertiary there were five Epochs: Paleocene, Eocene, Oligocene, Miocene, and Pliocene. [63]


Paleocene Epoch


During the Paleocene Epoch (65-55 million years ago) the vast inland seas of the Cretaceous Period dry up, exposing large land areas of North America and Eurasia. Australia begins to separate from Antarctica, and Greenland splits from North America. A remnant Tethys Sea persists in the equatorial region. Mammals diversify and spread into all major environments. Placental mammals eventually dominate the land, and many differentiated forms evolve, including early ungulates (hoofed animals), primates, rodents and carnivores.[64]

Eocene Epoch


During the Eocene Epoch (55-38 million years ago) plate tectonics and volcanic activity form the Rockies in western North America. Erosion fills basins, laying down bauxite deposits in western North America. The continental collision between India and Asia culminates in the Alpine-Himalayan mountain system.[65] Antarctica and Australia separate and drift apart. The climate is subtropical and moist throughout North America and Europe. Early forms of the horse, rhinoceros, and camel, and other modern groups such as bats, evolve in Europe and North America. Creodonts, a strange carnivorous group, and ruminant ungulates evolve. Cetaceans–baleen whales, toothed whales, and dolphins–evolve from terrestrial meat-eating ungulates. Sirenians (dugongs and manatees) first evolve in the shallow Tethys Sea.[66]

Oligocene Epoch


During the Oligocene Epoch (38-26 million years ago) tectonic plate movement is still very dynamic. Africa and Europe squeeze together, closing the Tethys Sea and leaving the Mediterranean Sea as a remnant. Volcanism and fragmentation of western North America are associated with the emplacement of major ore deposits. The climate is generally temperate. Glaciation begins in Antarctica.

Modern mammals become the dominant vertebrate life forms including horses, pigs, true carnivores, rhinoceroses, elephants, and camels. Oreodonts diversify in North America. Early primates appear in North America and early apes appear in Egypt. Many archaic mammals become extinct. The world’s oceans teem with more vertebrate life. Grasslands expand, and forest regions diminish.[67]

Miocene Epoch


During the Miocene Epoch (26–6 million years ago), modern ocean currents are essentially established. A drop in sea level isolates and dries up the Mediterranean Sea, leaving evaporite deposits on its floor. The climate is generally cooler than that of the Oligocene Epoch.[68] A cold transantarctic ocean current isolates the waters around Antarctica, and the continent becomes permanently frozen. Many mammal forms are essentially modern, and almost half of the placental mammal families are present, as well as the early seals and walruses. Many modern birds, such as herons, rails, ducks, eagles, hawks, crows, and sparrows, are present in Europe and Asia.[69]


Higher primates undergo substantial evolution, and advanced primates, including apes, are present in southern Europe and Asia. Carcharocles megalodon, the largest predatory shark ever to have lived, inhabits the seas. The coasts are submerged, and kelp forests appear. On land, spreading grasslands replace forests over large areas on several continents.[70]

Pliocene Epoch

During the Pliocene Epoch (6 to 1.8 million years ago), the formation of the Isthmus of Panama changes ocean circulation patterns and leads to the formation of an Arctic ice cap. Plate tectonic interactions, including subduction of the Pacific Plate, result in the uplift of the Sierra Nevada, the formation of the volcanic Cascade Range, and the onset of strike-slip faulting along the San Andreas Fault.[71]

In Europe, the Alps continue to rise. Global climates become cooler and drier. During this epoch camels and horses are abundant throughout North America. Ground sloths also appear. In general, Pliocene mammals are larger than those of earlier epochs. Primates continue to evolve, and the australopithecines, antecedents to Homo sapiens, develop late in the Pliocene in Africa. In North America, rhinoceroses become extinct.[72]

QUATERNARY PERIOD: Pleistocene and Holocene Epochs


Pleistocene Epoch


The Pleistocene Epoch (1.8 million to 10,000 years ago) is best known as the “Great Ice Age.” Ice sheets and other glaciers encroach and retreat during four or five separate glacial periods. At their peak, as much as 30% of the Earth’s surface is covered by glaciers, and parts of the northern oceans are frozen. The movement of the glaciers alters the landscape. Lakes, such as the Great Lakes in North America, are formed by the retreating ice.[73] The oldest species of Homo, Homo habilis, evolves.

The flora and fauna of the regions not covered by ice are essentially the same as those of the earlier Pliocene Epoch. Mammal evolution includes the development of large forms: the woolly mammoth, woolly rhinoceros, musk ox, moose, reindeer, elephant, mastodon, bison, and ground sloth. In the Americas, large mammals such as horses, camels, mammoths, mastodons, saber-toothed cats, and ground sloths are entirely extinct by the end of this epoch.[74]

Holocene Epoch


The Holocene Epoch (10,000 years ago to the present) is arguably just an interval between the glacial incursions that typified the Pleistocene Epoch, and therefore not a separate epoch in itself. It is, however, marked by the presence and influence of Homo sapiens. During this time, the glaciers retreat, the climate warms, and deserts form in some areas. Human civilization develops. The activities of mankind begin to affect world climates. The extinction of other species continues.[75]

[1] Richard Leakey and Roger Lewin, Origins (New York: E.P. Dutton, 1977), 28.

[2] Ibid.

[3] Ibid., 25.

[4] Ibid.

[5] Ibid.

[6] Ibid.

[7] Ibid.

[8] Ibid., 26, 28.

[9] Ibid., 25.

[10] Ibid.

[11] Ibid., 24.

[12] “Homininae,” Wikipedia, the free encyclopedia [online]; accessed 26 Jun. 2005; available from http://en.wikipedia.org/wiki/Hominine.

[13] Ibid.

[14] Stephen Jay Gould, “The Evolution of Life on Earth,” [online]; accessed 14 Jan. 2006; available from http://www.geocities.com/CapeCanaveral/Lab/2948/gould.html.

[15] Ibid.

[16] Ibid.

[17] Ibid.

[18] Ibid.

[19] “The Evolution of Mammals,” [online]; accessed 1 Jul. 2005; available from http://www.earthlife.net/mammals/evolution.html.

[20] “Geologic Time: The Eocene,” [online]; accessed 1 Apr. 2005; available from http://www.mnh.si.edu/anthro/humanorigins/faq/gt/cenozoic/eocene.htm.

[21] Ibid.

[22] Ibid.

[23] Ibid.

[24] Ibid.

[25] Ibid.

[26] Ibid.

[27] “Human Ancestors Hall: Our Primate Origins,” [online]; accessed 1 Apr. 2005; available from http://www.mnh.si.edu/anthro/humanorigins/ha/primate.html.

[28] “Homo (genus)” Wikipedia, the free encyclopedia, [online]; available from http://en.wikipedia.org/wiki/Homo_(genus).

[29] “Human Evolution,” Wikipedia, the free encyclopedia, [online]; accessed 19 Jan. 2006; available from http://en.wikipedia.org/wiki/Human_evolution.

[30] “Biography of Charles Darwin,” [online]; accessed 8 Nov. 2004; available from http://www.lib.virginia.edu/science/parshall/darwin.html.

[31] “Charles (Robert) Darwin (1809-1882),” [online]; accessed 12 Feb. 2005; available from http://www.kirjasto.sci.fi/darwin.htm.

[32] “The Scientists: Charles Darwin,” [online]; accessed 12 Feb. 2005; available from http://www.plupete.com/Literature/Biographies/Science/Darwin.htm.

[33] Ibid.

[34] Jerry Adler, “Charles Darwin: Evolution of a Scientist,” Newsweek, 28 Nov. 2005, 54-55.

[35] Ibid.

[36] “San Diego Natural History Museum: Geologic Time Line,” [online]; accessed 2 Aug. 2005; available from http://www.sdnhm.org/fieldguide/fossils/timeline.html.

[37] “Geologic Time Periods,” [online]; accessed 2 Aug. 2005; available from http://www.enchantedlearning.com/subjects/dinosaurs/glossary/Geologictimeperiods.shtml.

[38] Ibid.

[39] Ibid.

[40] Ibid.

[41] Ibid.

[42] Ibid.

[43] Ibid.

[44] Ibid.

[45] Ibid.

[46] Ibid.

[47] Ibid.

[48] Ibid.

[49] Ibid.

[50] Ibid.

[51] Ibid.

[52] Ibid.

[53] Ibid.

[54] Ibid.

[55] Ibid.

[56] Ibid.

[57] Ibid.

[58] Ibid.

[59] Ibid.

[60] Ibid.

[61] Ibid.

[62] Ibid.

[63] Ibid.

[64] Ibid.

[65] Ibid.

[66] Ibid.

[67] Ibid.

[68] Ibid.

[69] Ibid.

[70] Ibid.

[71] Ibid.

[72] Ibid.

[73] Ibid.

[74] Ibid.

[75] Ibid.

Read Full Post »

Part I

Update on Diabetes in America
[Epidemiology and New Research Findings]


This is Part I of a two-part series on diabetes in this country. Part I will be an epidemiological look at this disease in terms of statistical estimates disaggregated by age, race, and gender.

Part II of the series will concentrate on presenting some of the new research findings as they relate to type 2 diabetes, insulin resistance, obesity, and a newly emerging factor: inflammation.

2011 National Diabetes Fact Sheet

Diagnosed and undiagnosed diabetes in the United States, all ages, 2010

Total: 25.8 million people, or 8.3% of the U.S. population, have diabetes.
Diagnosed: 18.8 million people
Undiagnosed: 7.0 million people

Estimation Methods

The estimates in this fact sheet were derived from various data systems of the Centers for Disease Control and Prevention (CDC), the Indian Health Service’s (IHS) National Patient Information Reporting System (NPIRS), the U.S. Renal Data System of the National Institutes of Health (NIH), the U.S. Census Bureau, and published studies.

The estimated percentages and the total number of people with diabetes and prediabetes were derived from 2005–2008 National Health and Nutrition Examination Survey (NHANES), 2007–2009 National Health Interview Survey (NHIS), 2009 IHS data, and 2010 U.S. resident population estimates.

The diabetes and prediabetes estimates from NHANES were applied to the 2010 U.S. resident population estimates to derive the estimated number of adults with diabetes or prediabetes. The methods used to generate the estimates for the fact sheet may vary over time and need to be considered before comparing fact sheets. In contrast to the 2007 National Diabetes Fact Sheet, which used fasting glucose data to estimate undiagnosed diabetes and prediabetes, the 2011 National Diabetes Fact Sheet used both fasting glucose and hemoglobin A1c (A1c) levels to derive estimates for undiagnosed diabetes and prediabetes. These tests were chosen because they are most frequently used in clinical practice.
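The final step described above, applying a survey prevalence to a resident population estimate, is simple arithmetic. A minimal sketch, using the fact sheet's own figures (the adult population below is the value implied by 25.6 million at 11.3%; the CDC's actual estimation involves survey weights and is far more involved):

```python
# Illustrative sketch only: applying an NHANES-style prevalence estimate
# to a census population estimate, with figures from the 2011 fact sheet.

def estimated_cases(prevalence: float, population: int) -> int:
    """Number of people with the condition implied by a prevalence rate."""
    return round(prevalence * population)

# 11.3% of adults aged 20 or older have diabetes, out of roughly
# 226.6 million U.S. adults in 2010 (implied by the fact sheet).
adults_20_plus = 226_600_000
prevalence = 0.113

print(estimated_cases(prevalence, adults_20_plus))  # ~25.6 million
```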

Diagnosed and undiagnosed diabetes among people aged 20 years or older, United States, 2010

Age 20 years or older: 25.6 million, or 11.3% of all people in this age group, have diabetes.

Age 65 years or older: 10.9 million, or 26.9% of all people in this age group, have diabetes.

Men: 13.0 million, or 11.8% of all men aged 20 years or older, have diabetes.

Women: 12.6 million, or 10.8% of all women aged 20 years or older, have diabetes.

Non-Hispanic whites: 15.7 million, or 10.2% of all non-Hispanic whites aged 20 years or older, have diabetes.

Non-Hispanic blacks: 4.9 million, or 18.7% of all non-Hispanic blacks aged 20 years or older, have diabetes.

Sufficient data are not available to estimate the total prevalence of diabetes (diagnosed and undiagnosed) for other U.S. racial/ethnic minority populations.

Diagnosed diabetes in people younger than 20 years of age, United States, 2010

About 215,000 people younger than 20 years have diabetes (type 1 or type 2). This represents 0.26% of all people in this age group. Estimates of undiagnosed diabetes are unavailable for this age group.

Racial and ethnic differences in diagnosed diabetes

National estimates of diagnosed diabetes for some but not all minority groups are available from national survey data and from the IHS NPIRS, which includes data for approximately 1.9 million American Indians and Alaska Natives in the United States who receive health care from the IHS. Differences in diabetes prevalence by race/ethnicity are partially attributable to age differences. Adjustment for age makes results from racial/ethnic groups more comparable.
• Data from the 2009 IHS NPIRS indicate that 14.2% of American Indians and Alaska Natives aged 20 years or older who received care from IHS had diagnosed diabetes.

• After adjusting for population age differences, 16.1% of the total adult population served by IHS had diagnosed diabetes, with rates varying by region from 5.5% among Alaska Native adults to 33.5% among American Indian adults in southern Arizona.

• After adjusting for population age differences, 2007–2009 national survey data for people aged 20 years or older indicate that 7.1% of non-Hispanic whites, 8.4% of Asian Americans, 11.8% of Hispanics, and 12.6% of non-Hispanic blacks had diagnosed diabetes. Among Hispanics, rates were 7.6% for both Cubans and for Central and South Americans, 13.3% for Mexican Americans, and 13.8% for Puerto Ricans.

• Compared to non-Hispanic white adults, the risk of diagnosed diabetes was 18% higher among Asian Americans, 66% higher among Hispanics, and 77% higher among non-Hispanic blacks. Among Hispanics compared to non-Hispanic white adults, the risk of diagnosed diabetes was about the same for Cubans and for Central and South Americans, 87% higher for Mexican Americans, and 94% higher for Puerto Ricans.
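The relative-risk bullet follows directly from the age-adjusted prevalences in the preceding bullet, with non-Hispanic whites (7.1%) as the reference group. A quick arithmetic check:

```python
# Checking the quoted relative-risk percentages against the age-adjusted
# prevalences above (non-Hispanic whites, 7.1%, as the reference group).
reference = 7.1
prevalences = {
    "Asian Americans": 8.4,
    "Hispanics": 11.8,
    "non-Hispanic blacks": 12.6,
    "Mexican Americans": 13.3,
    "Puerto Ricans": 13.8,
}

for group, p in prevalences.items():
    excess = (p / reference - 1) * 100  # percent higher than reference
    print(f"{group}: {excess:.0f}% higher")
```

The printed values (18%, 66%, 77%, 87%, 94%) match the figures quoted in the text.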

New cases of diagnosed diabetes among people aged 20 years or older, United States, 2010

About 1.9 million people aged 20 years or older were newly diagnosed with diabetes in 2010.

New cases of diagnosed diabetes among people younger than 20 years of age, United States, 2002–2005

SEARCH for Diabetes in Youth is a multicenter study funded by CDC and NIH to examine diabetes (type 1 and type 2) among children and adolescents in the United States. SEARCH findings for the communities studied include the following:
• During 2002–2005, 15,600 youth were newly diagnosed with type 1 diabetes annually, and 3,600 youth were newly diagnosed with type 2 diabetes annually.

• Non-Hispanic white youth had the highest rate of new cases of type 1 diabetes (24.8 per 100,000 per year among those younger than 10 years and 22.6 per 100,000 per year among those aged 10–19 years).

• Type 2 diabetes was extremely rare in the youngest age group.

Periodontal (gum) disease
• Adults with poorly controlled diabetes (A1c above 9%) were 2.9 times more likely to have severe periodontitis than those without diabetes. The likelihood was even greater (4.6 times) among smokers with poorly controlled diabetes.

• About one-third of people with diabetes have severe periodontal disease consisting of loss of attachment (5 millimeters or more) of the gums to the teeth.

Complications of pregnancy
• Poorly controlled diabetes before conception and during the first trimester of pregnancy among women with type 1 diabetes can cause major birth defects in 5% to 10% of pregnancies and spontaneous abortions in 15% to 20% of pregnancies. On the other hand, for a woman with pre-existing diabetes, optimizing blood glucose levels before and during early pregnancy can reduce the risk of birth defects in her infant.

• Poorly controlled diabetes during the second and third trimesters of pregnancy can result in excessively large babies, posing a risk to both mother and child.

Other complications
• Uncontrolled diabetes often leads to biochemical imbalances that can cause acute life-threatening events, such as diabetic ketoacidosis and hyperosmolar (nonketotic) coma.

• People with diabetes are more susceptible to many other illnesses. Once they acquire these illnesses, they often have worse prognoses. For example, they are more likely to die with pneumonia or influenza than people who do not have diabetes.

• People with diabetes aged 60 years or older are 2–3 times more likely to report an inability to walk one-quarter of a mile, climb stairs, or do housework compared with people without diabetes in the same age group.

• People with diabetes are twice as likely as people without diabetes to have depression, which can complicate diabetes management. In addition, depression is associated with a 60% increased risk of developing type 2 diabetes.

Preventing diabetes complications

As indicated above, diabetes can affect many parts of the body and can lead to serious complications such as blindness, kidney damage, and lower-limb amputations. Working together, people with diabetes, their support network, and their health care providers can reduce the occurrence of these and other diabetes complications by controlling the levels of blood glucose, blood pressure, and blood lipids, and by receiving other preventive care practices in a timely manner.

Glucose control
• Studies in the United States and abroad have found that improved glycemic control benefits people with either type 1 or type 2 diabetes. In general, every percentage point drop in A1c blood test results (e.g., from 8.0% to 7.0%) can reduce the risk of microvascular complications (eye, kidney, and nerve diseases) by 40%. The absolute difference in risk may vary for certain subgroups of people.

• In patients with type 1 diabetes, intensive insulin therapy has long-term beneficial effects on the risk of cardiovascular disease.

Blood pressure control
• Blood pressure control reduces the risk of cardiovascular disease (heart disease or stroke) among people with diabetes by 33% to 50%, and the risk of microvascular complications (eye, kidney, and nerve diseases) by approximately 33%.

• In general, for every 10 mmHg reduction in systolic blood pressure, the risk for any complication related to diabetes is reduced by 12%.

• No benefit of reducing systolic blood pressure below 140 mmHg has been demonstrated in randomized clinical trials.

• Reducing diastolic blood pressure from 90 mmHg to 80 mmHg in people with diabetes reduces the risk of major cardiovascular events by 50%.
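If one assumes, purely for illustration, that the 12% reduction per 10 mmHg acts multiplicatively across successive reductions (the bullet above states only the single-step figure, and the trial literature does not guarantee this extrapolation), the compounding works out as follows:

```python
# Hypothetical illustration of the "12% per 10 mmHg" figure, assuming
# (for this sketch only) that each 10 mmHg reduction multiplies the
# remaining risk by 0.88. Not a clinical model.

def remaining_risk(drop_mmhg: float, reduction_per_10: float = 0.12) -> float:
    """Fraction of baseline risk remaining after a systolic BP drop."""
    steps = drop_mmhg / 10.0
    return (1.0 - reduction_per_10) ** steps

print(f"{remaining_risk(10):.2f}")  # 0.88 -> the quoted 12% reduction
print(f"{remaining_risk(20):.2f}")  # 0.77 -> about a 23% reduction
```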
Control of blood lipids
• Improved control of LDL cholesterol can reduce cardiovascular complications by 20% to 50%.

Preventive care practices for eyes, feet, and kidneys
• Detecting and treating diabetic eye disease with laser therapy can reduce the development of severe vision loss by an estimated 50% to 60%.

• About 65% of adults with diabetes and poor vision can be helped by appropriate eyeglasses.

• Comprehensive foot care programs, i.e., those that include risk assessment, foot-care education and preventive therapy, treatment of foot problems, and referral to specialists, can reduce amputation rates by 45% to 85%.

• Detecting and treating early diabetic kidney disease by lowering blood pressure can reduce the decline in kidney function by 30% to 70%. Treatment with particular medications for hypertension called angiotensin-converting enzyme inhibitors (ACEIs) and angiotensin receptor blockers (ARBs) is more effective in reducing the decline in kidney function than is treatment with other blood pressure lowering drugs.

• In addition to lowering blood pressure, ARBs and ACEIs reduce proteinuria, a risk factor for developing kidney disease, by about 35%.

Post Script

In Part I, data were presented on diabetes in order to give the reader an epidemiological look at this disease. In Part II, data will be presented on some of the research aimed at understanding, or at least better treating, this dreadful disease. It is hoped that as each year passes, researchers will come closer to finding a cure for diabetes.

Read Full Post »

Placing Value on Life:

 Case in Point—The Never-ending Abortion Issue


     Abortion is one of the most troubling and hotly debated issues of our time. Weighing the social value of life is at the heart of the issue. Abortion remains a permanent fixture of our social landscape because of the unending cultural conflict it creates among values, politics, and science, and because it is so intimately intertwined with the social definition of life. Yet, although many people believe life begins at conception, it is ironic that life itself is not universally valued. The one underlying universal in all this seems to be that social context (value judgments rendered in different social situations) dictates the relative value placed on life.

Why isn’t life universally valued? Because its value depends upon the differing social contexts related to life itself. Even those who view life as beginning at conception think contextually, as when the life of the mother is at stake or conception is due to rape or incest. Even with abortion, there is no unconditional, universal value placed on all life.

The Value of Life in Different Social Settings

Many issues today have to do with the value that is placed on life in different social settings. Life issues include abortion, the death penalty, suicide and the right to die among the terminally ill, slaughtering animal species so humans can eat and using animals in medical research, stem cell research with human embryos, and the killing of enemy combatants during wartime. Lastly, there is murder. In 2010, there were 12,996 murders in the United States.

It is important to know that the value placed on life has always been dictated by social context. Conservatives overwhelmingly tend to support the death penalty and the state’s taking of a human life, yet they form the major support group against abortion. Liberals tend to support a ban or moratorium on the death penalty, yet often have no problem terminating the life of the unborn. Liberals tend to accept the right-to-die claims of the terminally ill, while conservatives and the Catholic Church generally see it as wrong or sinful. Both liberals and conservatives buy meat or fish at the local grocery store, and neither has a problem terminating life in the animal kingdom. Both liberals and conservatives serving in a war zone are willing, if necessary, to take the life of an enemy combatant. Even John Q. Citizen will take life in dire situations to protect family members or himself when threatened.

Any one of the above issues can be explored in more detail as it relates to valuing life. But let’s get back to abortion and social context.

 The Issue

The crux of the abortion issue comes down to two opposing positions: “a woman’s right to choose,” fostered by liberals and women’s groups, versus “a right to life,” fostered by conservatives and Right-to-Life groups. One of the long-standing groups against abortion has been the Catholic Church. However, many Catholics worldwide, and others, want to overturn the Church’s long-standing ban on abortion. Abortion nevertheless violates one of the Church’s basic tenets: belief in the sanctity of life. This long-standing stance is an appeal to faith, particularly faith in the righteousness of the position the Church has taken.

Regardless of which side one politically comes down on regarding abortion, Right-to-Life groups do have scientific support for their definition of life. And the best definition of life is a scientific one. That scientific support concerns what constitutes a “life form” more than the precise moment life occurs. The biological definition of a “life form” is very consistent with the social definition of human life espoused by Right-to-Life groups.

In the 1940s and 1950s, a common social definition of life was that life begins at conception. According to scientists, six characteristics define a “life form.” Not everyone agrees on a social definition of life, but it is generally accepted that life exhibits the following biological phenomena: organization, metabolism, growth, adaptation, response to stimuli, and reproduction. All six characteristics are required for a population to be considered a life form. Fetuses, in the earliest stages of their development, manifest all six characteristics.

So why is there such social conflict over the issue of abortion? If biologists have given us a correct definition of life, why has our social definition of life drifted so far from that of the 1940s and 1950s? The answer appears to be political, not scientific or data-driven. It is all about changing underlying social values that often masquerade in political debates as objective argumentation.

The Politics of Abortion

     One of the strongest forces that contextualizes abortion, and other issues where the value of life is concerned, is politics. At the heart of all politics is value judgment. A woman’s right to choose is more than a slogan: feminists and abortion-rights groups reject any political proposals that would seek to protect the unborn. Right-to-Life groups are likewise very political in supporting efforts by Republicans in Congress who put forth anti-abortion legislation.

What makes the abortion issue so beguiling is that each side perceives that the only way to secure its own group’s rights is to deny the rights of the opposing group. This entrenchment on both sides implies that logic and reason are the enemies of both; values and value judgment dominate the political debate over abortion. Not surprisingly, the biological definition of life carries very little weight in that debate. Those for and against abortion prefer their own definitions of when life begins. It is, unfortunately, a debate in which the definition of when life begins continues to produce great division among the public.

     It is for these reasons that the issue of abortion is unlikely to be resolved any time soon. It is very unfortunate that life under all circumstances is not universally valued. But all of these differing social contexts, which are our present-day reality worldwide, would fade into dust if the world ever chooses to engage in war with atomic bombs.

      May that day never come! But if it does arrive, two things are very clear. One, we will no longer need to debate whether life begins at conception or at birth. And two, on that day, society may finally and unconditionally value and appreciate all life as it witnesses humanity meet its end through a self-inflicted worldwide suicide known as nuclear annihilation.


Read Full Post »

On the Horizon: A Real Cure for All Cancers?


The world of medicine is changing and nanotechnology is leading the way. The future of cancer diagnosis and treatment does indeed look bright.

Where cancer research and treatment are concerned, there is clearly a need to develop innovative new diagnostic and therapeutic methods. During the last 10 years, tremendous progress has been made in the development of new molecular imaging probes and therapeutic agents targeting cancer. One field that has contributed greatly to diagnostic and therapeutic methods is nanotechnology (technology for use at the atomic or molecular level). For example, there are now nanoparticle-enabled technologies that do a better job of detecting and treating cancer than ever before. There appear to be three goals of these newer technologies:

(1)   Early detection of the disease,

(2)   Enhanced ability to monitor therapeutic response, and

(3)   Targeted delivery of therapeutic agents, such as cancer-killing drugs.

There are other uses of nanotechnology, but the purpose of this blog is to focus on its use in treating the devastating disease of cancer. In my grandmother’s day, a diagnosis of cancer was a death sentence. From personal experience, I know that in today’s world that is not necessarily true.



In 2004 I lost a kidney to kidney cancer. I was nevertheless one of the lucky ones: I have now been a kidney cancer survivor for eight years. Consequently, I have a personal stake in finding a cure for cancer (and for recurrences of it). I was astonished recently (and got goose bumps all over) when I read about a 17-year-old high school senior from Cupertino, California, who may have found a way to cure cancer.

Her name is Angela Zhang. She received a $100,000 scholarship for her science project because of the extraordinary nature of what she put forth: a comprehensive, self-contained way to use nanoparticles to isolate and treat all cancer tumors while leaving healthy tissue and cells alone. The $100,000 Zhang earned came with the first-prize award in the Siemens Competition in Math, Science & Technology.

In my opinion, Angela Zhang is not necessarily a super-genius. But she is a very bright, precocious, and persevering young person who demonstrated an uncanny ability to logically synthesize existing research data and ideas from specialized scientific fields. In this case, she researched the field of nanotechnology, emphasizing in particular the synthesis of information from the sub-field of medical nanotechnology.

But, of course, what she did wasn’t only a clever assimilation of research ideas from the scientific literature. There were also the 1,000 hands-on hours spent creating the nanoparticle and figuring out how to integrate a drug-delivery system at the micron level that could be closely monitored for its effects. And she achieved a very important aspect of cancer treatment: delivering a cancer drug without damaging healthy cells and tissues. Young people like Angela will one day be at the forefront of research trying to solve many of the complex health problems facing large populations of citizens everywhere. I am excited that serious medical problem-solving is now transitioning to a next generation who possess innovative ideas and the perseverance to build scientific consensus around the most effective ways to diagnose and treat serious diseases.

Complex medical problems like cancer deserve a bit more in-depth reporting. Therefore I will present this particular blog in three sections: (1) a review of worldwide and national statistics on the prevalence of cancer, (2) a description of Angela Zhang’s science project and concepts, and (3) a look at the promising future of nanotechnology.

Section 1

Cancer Statistics from the World Health Organization

Q: Are the number of cancer cases increasing or decreasing in the world?

A: Cancer is a leading cause of death worldwide and the total number of cases globally is increasing.

The number of global cancer deaths is projected to increase 45% from 2007 to 2030 (from 7.9 million to 11.5 million deaths), influenced in part by an increasing and aging global population. The estimated rise takes into account expected slight declines in death rates for some cancers in high resource countries. New cases of cancer in the same period are estimated to jump from 11.3 million in 2007 to 15.5 million in 2030.
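The quoted percentage can be checked with a line of arithmetic against the figures in the passage above:

```python
# Sanity-checking the WHO projection quoted above: cancer deaths rising
# from 7.9 million (2007) to 11.5 million (2030).
deaths_2007 = 7.9e6
deaths_2030 = 11.5e6

percent_increase = (deaths_2030 - deaths_2007) / deaths_2007 * 100
print(f"{percent_increase:.0f}%")  # 46% -- consistent with the quoted 45%

# New cases: 11.3 million (2007) to 15.5 million (2030).
cases_increase = (15.5 - 11.3) / 11.3 * 100
print(f"{cases_increase:.0f}%")
```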

In most developed countries, cancer is the second largest cause of death after cardiovascular disease, and epidemiological evidence points to this trend emerging in the less developed world. This is particularly true in countries in “transition” or middle-income countries, such as in South America and Asia. Already more than half of all cancer cases occur in developing countries.

Lung cancer kills more people than any other cancer – a trend that is expected to continue until 2030, unless efforts for global tobacco control are greatly intensified. Some cancers are more common in developed countries: prostate, breast and colon. Liver, stomach and cervical cancer are more common in developing countries.

A number of common risk factors have been linked to the development of cancer: an unhealthy lifestyle (including tobacco and alcohol use, inadequate diet, and physical inactivity), exposure to occupational carcinogens (e.g., asbestos) or environmental carcinogens (e.g., indoor air pollution), radiation (e.g., ultraviolet and ionizing radiation), and some infections (such as hepatitis B or human papilloma virus infection).

Key risk factors for cancer that have been identified are:

  • tobacco use – responsible for 1.8 million cancer deaths per year (60% of these deaths occur in low- and middle-income countries);
  • being overweight, obese or physically inactive – together responsible for 274,000 cancer deaths per year;
  • harmful alcohol use – responsible for 351,000 cancer deaths per year;
  • sexually transmitted human papilloma virus (HPV) infection – responsible for 235,000 cancer deaths per year; and
  • occupational carcinogens – responsible for at least 152,000 cancer deaths per year.

Cancer prevention is an essential component of all cancer control plans because about 40% of all cancer deaths can be prevented.


The Centers for Disease Control and Prevention (CDC) provided the following statistics on cancer prevalence in the United States:

Cancer is the second leading cause of death in the United States, exceeded only by heart disease. In 2007, more than 562,000 people died of cancer, and more than 1.45 million people had a diagnosis of cancer, according to United States Cancer Statistics: 1999–2007 Cancer Incidence and Mortality Data.

The cost of cancer extends beyond the number of lives lost and new diagnoses each year. Cancer survivors, as well as their family members, friends, and caregivers, may face physical, emotional, social, and spiritual challenges as a result of their cancer diagnosis and treatment. The financial costs of cancer also are overwhelming. According to the National Institutes of Health, cancer cost the United States an estimated $263.8 billion in medical costs and lost productivity in 2010.

Racial and Ethnic Differences

Cancer can affect men and women of all ages, races, and ethnicities, but it does not affect all groups equally. For example, African Americans are more likely to die of cancer than people of any other race or ethnicity. In 2007, the age-adjusted death rate per 100,000 people for all types of cancer combined was 216 for African Americans, 177 for whites, 119 for American Indians/Alaska Natives, 117 for Hispanics, and 108 for Asians/Pacific Islanders.
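Age adjustment (direct standardization) weights each group's age-specific rates by a common standard population, so that groups with different age structures can be compared fairly. A minimal sketch of the idea; the age bands, rates, and weights below are entirely hypothetical, and the real CDC calculation uses the 2000 U.S. standard population with many more age bands:

```python
# Direct age standardization, sketched with hypothetical numbers.

def age_adjusted_rate(rates_per_100k: list[float], weights: list[float]) -> float:
    """Weighted average of age-specific rates (weights must sum to 1)."""
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(r * w for r, w in zip(rates_per_100k, weights))

# Hypothetical age-specific cancer death rates (per 100,000) for three
# broad age bands, and hypothetical standard-population weights.
rates = [20.0, 150.0, 900.0]   # young, middle-aged, older
weights = [0.4, 0.4, 0.2]      # share of the standard population

print(age_adjusted_rate(rates, weights))  # 248.0
```

Because every group is weighted by the same standard population, a group that simply skews older no longer shows an inflated rate for that reason alone.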

Effective Cancer Prevention Measures

Opportunities exist to reduce cancer risk and prevent some cancers. Cancer risk can be reduced by avoiding tobacco, limiting alcohol use, limiting exposure to ultraviolet rays from the sun and tanning beds, eating a diet rich in fruits and vegetables, maintaining a healthy weight, being physically active, and seeking regular medical care.

Research shows that screening for cervical and colorectal cancer at recommended intervals can prevent these diseases by finding lesions that can be treated before they become cancerous. Screening also can help find cervical, colorectal, and breast cancers at an early, treatable stage. Vaccines also can reduce cancer risk.

The human papillomavirus (HPV) vaccine helps prevent some cervical, vaginal, and vulvar cancers. The hepatitis B vaccine can reduce liver cancer risk. Making cancer screening, information, and referral services available and accessible to all Americans can reduce cancer incidence and deaths.

Where You Live Matters

The following table shows cancer death rates (2007) for each of the states. The rates found in various states may simply reflect the differences in deaths by ethnicity reported earlier. However, explaining death rates in terms of ethnicity per se is far more complicated, involving personal habits of diet and exercise, access to effective cancer treatment and health care, exposure to carcinogens, differential genetic makeup, attitudes toward disease prevention, and tobacco use.

U.S. Cancer Death Rates,* 2007

127.9–170.7: Arizona, California, Colorado, Connecticut, Florida, Hawaii, Idaho, Minnesota, New Mexico, New York, North Dakota, Texas, Utah

171.1–180.7: Iowa, Kansas, Maryland, Massachusetts, Montana, Nebraska, New Jersey, Oregon, Rhode Island, South Dakota, Washington, Wisconsin, Wyoming

181.0–191.9: Alaska, Georgia, Illinois, Maine, Michigan, Missouri, Nevada, New Hampshire, North Carolina, Pennsylvania, South Carolina, Vermont, Virginia

193.3–213.7: Alabama, Arkansas, Delaware, District of Columbia, Indiana, Kentucky, Louisiana, Mississippi, Ohio, Oklahoma, Tennessee, West Virginia

* Rates are per 100,000 people and are age-adjusted to the 2000 U.S. standard population. Incidence rates are for about 99% of the U.S. population; death rates are for 100% of the U.S. population.

Source: United States Cancer Statistics: 1999–2007 Cancer Incidence and Mortality Data, available at http://www.cdc.gov/uscs.
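For readers who want to explore the table above, here is a sketch of it as a simple lookup: given a state, which 2007 death-rate band (per 100,000, age-adjusted) did it fall into? The state lists are transcribed from the table; the helper function `band_for` is my own illustration, not anything from the CDC data files.

```python
# The 2007 state death-rate bands from the table above, as a lookup.
# Rates are per 100,000, age-adjusted to the 2000 U.S. standard population.

bands = {
    "127.9-170.7": ["Arizona", "California", "Colorado", "Connecticut",
                    "Florida", "Hawaii", "Idaho", "Minnesota", "New Mexico",
                    "New York", "North Dakota", "Texas", "Utah"],
    "171.1-180.7": ["Iowa", "Kansas", "Maryland", "Massachusetts", "Montana",
                    "Nebraska", "New Jersey", "Oregon", "Rhode Island",
                    "South Dakota", "Washington", "Wisconsin", "Wyoming"],
    "181.0-191.9": ["Alaska", "Georgia", "Illinois", "Maine", "Michigan",
                    "Missouri", "Nevada", "New Hampshire", "North Carolina",
                    "Pennsylvania", "South Carolina", "Vermont", "Virginia"],
    "193.3-213.7": ["Alabama", "Arkansas", "Delaware", "District of Columbia",
                    "Indiana", "Kentucky", "Louisiana", "Mississippi", "Ohio",
                    "Oklahoma", "Tennessee", "West Virginia"],
}

def band_for(state: str) -> str:
    """Return the 2007 cancer death-rate band a state belongs to."""
    for band, states in bands.items():
        if state in states:
            return band
    raise KeyError(state)

print(band_for("Kentucky"))  # a high-rate state
print(band_for("Utah"))      # a low-rate state
```

Note that the 50 states plus the District of Columbia are all accounted for across the four bands.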

Section 2

Angela’s Concept

Many times in the past I have read of some promising new cure for cancer from the medical or scientific community. When the public reads such articles about the "cause(s)" of cancer or some special "new technique" of treatment, there is always an emotional reaction and the hope that maybe this time a real cure for this devastating disease has at last been found. Too often the media would blow any new idea on causation or treatment all out of proportion. Reality would soon take hold again, and in a heartbeat the public's hopes would once again be dashed.

So why am I so enthusiastic, and not just reserved, scientifically conservative, and cautiously optimistic this time? Everything in my gut tells me this time it may be for real. Am I only reacting to all this emotionally, or is something important occurring that warrants further consideration? Either way, please read on.

This is what Angela, doing a first class piece of research, came up with:

She basically created in the laboratory a nanoparticle that kills cancer. The nanoparticle delivers the drug salinomycin to tumors, where it kills cancer cells, and it deposits gold and iron-oxide materials to help with MRI imaging.

The key word to remember is nanoparticle. Angela’s project was named, “Design of Image-guided, Photo-thermal Controlled Drug Releasing Multifunctional Nanosystem for the Treatment of Cancer Stem Cells.” It was apparently as complex, thorough, and revolutionary as it sounds.

Zhang’s achievement is impressive both for the level of understanding required to create such a nanoparticle in the first place and because she was only 17 years old. She had spent over 1,000 hours since 2009 researching and developing the particle, and she wants to go on to study chemical engineering, biomedical engineering, or physics. Her dream job is to be a research professor. Because cancer stem cells are so resistant to many forms of cancer treatment, Angela felt this was an area worth focusing on. Her nanoparticle is award-winning because it has the potential to overcome that resistance while allowing doctors to monitor the effects of the treatment using existing imaging techniques.

More specifically, Zhang developed a nanoparticle that can be delivered to the actual site of a tumor. Once there, it kills the cancer stem cells. However, Zhang went further and included both gold and iron-oxide components, which allow for non-invasive imaging of the site through MRI and photoacoustic imaging. What makes this innovative approach so important is that cancer stem cells are normally very resistant to many forms of cancer treatment.

This can be a little difficult for non-scientists to understand, so I’ll do the best I can to explain her ideas and keep it simple. Angela’s basic idea was to mix cancer medicine into a polymer that would attach to nanoparticles. The nanoparticles in turn would fasten themselves to cancer cells and show up on an MRI, allowing doctors to know exactly where tumors are. An infrared light aimed at the tumors would then melt the polymer and release the medicine, killing the cancer cells while leaving healthy cells unharmed. When tested on mice, the tumors almost completely disappeared. Although it will be years before scientists can run tests on humans, the results do seem very promising.
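The logic of Angela's idea can be pictured as a short sequence of steps. The sketch below is emphatically not her chemistry; every class name and number in it is invented for illustration. But it mirrors the mechanism described above: particles bind only to cancer cells, imaging reports where they are, and infrared light releases the drug only at bound sites.

```python
# A toy model of the targeted-release idea described above.
# All names and numbers here are invented for illustration;
# this is the logic of the mechanism, not the chemistry.

from dataclasses import dataclass

@dataclass
class Cell:
    cancerous: bool
    alive: bool = True
    bound_particle: bool = False  # drug-loaded nanoparticle attached?

def bind_nanoparticles(cells):
    """Targeting step: nanoparticles attach only to cancer cells."""
    for cell in cells:
        if cell.cancerous:
            cell.bound_particle = True

def mri_locate(cells):
    """Imaging step: report which cells carry particles (the tumor sites)."""
    return [i for i, c in enumerate(cells) if c.bound_particle]

def infrared_release(cells):
    """IR light melts the polymer; the drug kills only bound cells."""
    for cell in cells:
        if cell.bound_particle:
            cell.alive = False

# Nine cells, every third one cancerous:
cells = [Cell(cancerous=(i % 3 == 0)) for i in range(9)]
bind_nanoparticles(cells)
print("tumor sites:", mri_locate(cells))
infrared_release(cells)
print("healthy cells unharmed:", all(c.alive for c in cells if not c.cancerous))
print("cancer cells killed:", all(not c.alive for c in cells if c.cancerous))
```

The key property, visible even in this toy: because release happens only where particles have bound, healthy cells come through untouched.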


I needed to understand some of the terminology myself particularly with reference to two important questions: What is a nanoparticle and what is a polymer?

What is a Nanoparticle?

A nanoparticle is an ultrafine unit with dimensions measured in nanometers (nm; billionths of a meter). Nanoparticles possess unique physical properties such as very large surface areas and can be classified as hard or soft. They exist naturally in the environment and are produced as a result of human activities. Owing to their submicroscopic size, they have unique material characteristics, and manufactured nanoparticles may find practical applications in a variety of areas, including medicine, engineering, catalysis, and environmental remediation. Examples of naturally occurring nanoparticles include terpenes released from trees and materials emitted in smoke from volcanic eruptions and fires. Quantum dots and nanoscale zero-valent iron are examples of manufactured nanoparticles.

What is a Polymer?

Polymers are made up of many molecules all strung together to form really long chains (and sometimes more complicated structures, too).

What makes polymers so interesting is that how they act depends on what kinds of molecules they’re made of and how those molecules are put together. The properties of anything made out of polymers really reflect what’s going on at the ultra-tiny (molecular) level. So, things made of polymers look, feel, and act differently depending on how their atoms and molecules are connected. Some polymers are rubbery, like a bouncy ball; some are sticky and gooey; and some are hard and tough, like a skateboard.

Advances in polymer science have led to the development of several novel drug-delivery systems. A proper consideration of surface and bulk properties can aid in the designing of polymers for various drug-delivery applications. Biodegradable polymers find widespread use in drug delivery as they can be degraded to non-toxic monomers inside the body.

Novel supramolecular structures based on polyethylene oxide copolymers and dendrimers are being intensively researched for delivery of genes and macromolecules. Hydrogels that can respond to a variety of physical, chemical and biological stimuli hold enormous potential for design of closed-loop drug-delivery systems. Design and synthesis of novel combinations of polymers will expand the scope of new drug-delivery systems in the future.

Section 3

A Bright Future Ahead for Cancer Diagnosis and Treatment

The upshot of this blog is to report that the future of cancer diagnosis and treatment looks very bright and promising. The thrust of this article is really nanotechnology in medicine, which offers some exciting possibilities. Some techniques are only imagined, while others are at various stages of testing or are actually being used today.

Nanotechnology in medicine involves various applications of nanoparticles that are currently under development. Long-term research involves the use of manufactured nano-robots, whose purpose is to make repairs at the cellular level (how exciting is that idea!).

Whatever you call it, the use of nanotechnology in the field of medicine could revolutionize the way we detect and treat damage to the human body and disease in the future, and many techniques only imagined a few years ago are making remarkable progress towards becoming realities.

Nanotechnology in Medicine Application: Drug Delivery

As I said earlier, one application of nanotechnology in medicine currently being developed involves employing nanoparticles to deliver not only drugs but also heat, light, or other substances to specific types of cells (such as cancer cells). Particles are engineered to be attracted to diseased cells, which allows direct treatment of those cells. This technique reduces damage to healthy cells in the body and allows for earlier detection of disease.

One example is nanoparticles that deliver chemotherapy drugs directly to cancer cells. Tests of targeted chemotherapy delivery are in progress, and final approval for use with cancer patients is pending, as explained on CytImmune Sciences’ website. CytImmune has published the preliminary results of a Phase I clinical trial of its first targeted chemotherapy drug.

Many researchers attach ethylene glycol molecules to nanoparticles that deliver therapeutic drugs to cancer tumors. The ethylene glycol molecules stop white blood cells from recognizing the nanoparticles as foreign materials, allowing them to circulate in the blood stream long enough to attach to cancer tumors. However, researchers at the University of California, San Diego believe that they can increase the time nanoparticles can circulate in the blood stream. They are coating nanoparticles containing therapeutic drugs with membranes from red blood cells and have shown that these nanoparticles will circulate in a mouse’s blood stream for almost two days, instead of the few hours observed for nanoparticles using ethylene glycol molecules.

Researchers are also continuing to look for more effective methods to target nanoparticles carrying therapeutic drugs directly to diseased cells. For example, scientists at MIT have demonstrated increased levels of drug delivery to tumors by using two types of nanoparticles. The first type locates the cancer tumor, and the second type (carrying the therapeutic drugs) homes in on a signal generated by the first (I thought this was brilliant).

If you hate getting shots, you’ll be glad to hear that oral administration of drugs currently delivered by injection may be possible in many cases. The drug is encapsulated in a nanoparticle, which helps it pass through the stomach to deliver the drug into the bloodstream. Efforts are underway to develop oral administration of several different drugs using a variety of nanoparticles. One company that has progressed to the clinical testing stage, with a drug for treating systemic fungal diseases, is BioDelivery Sciences, which uses a nanoparticle called a cochleate.

Nanotechnology in Medicine Application: Therapy Techniques

What are some of the applications of nanotechnology related to therapy techniques? The following are some of the most promising therapies currently being worked on:

  • Buckyballs that are used to trap free radicals generated during an allergic reaction and block the inflammation that results from an allergic reaction.
  • Nanoshells may be used to concentrate the heat from infrared light to destroy cancer cells with minimal damage to surrounding healthy cells. Nanospectra Biosciences has developed such a treatment using nanoshells illuminated by an infrared laser; it has been approved for a pilot trial with human patients.
  • Nanoparticles, when activated by x-rays, generate electrons that cause the destruction of cancer cells to which they have attached themselves. This is intended to be used in place of radiation therapy with much less damage to healthy tissue. Nanobiotix has released preclinical results for this technique.
  • Aluminosilicate nanoparticles can more quickly reduce bleeding in trauma patients by absorbing water, causing blood in a wound to clot quickly. Z-Medica is producing a medical gauze that uses aluminosilicate nanoparticles.
  • Nanofibers can stimulate the production of cartilage in damaged joints.
  • Nanoparticles may be used, when inhaled, to stimulate an immune response to fight respiratory viruses.

Nanotechnology in Medicine Application: Diagnostic and Imaging Techniques

Quantum dots (qdots) may be used in the future for locating cancer tumors in patients and, in the near term, for performing diagnostic tests on samples. Invitrogen’s website provides information about qdots that are available for both uses, although at this time the use “in vivo” (in a living creature) is limited to experiments with lab animals. There is concern about toxicity based on the materials quantum dots are made from, so their use in human patients is restricted. However, work is being done with quantum dots composed of silicon, which is believed to be less toxic than the cadmium contained in many quantum dots.

Iron oxide nanoparticles can also be used to improve MRI images of cancer tumors. The nanoparticle is coated with a peptide that binds to a cancer tumor; once the nanoparticles are attached to the tumor, the magnetic properties of the iron oxide enhance the images from the magnetic resonance imaging scan.

Nanoparticles can attach to proteins or other molecules, allowing detection of disease indicators in a lab sample at a very early stage. Several efforts to develop nanoparticle disease-detection systems are underway. One system being developed by Nanosphere, Inc. uses gold nanoparticles; Nanosphere has clinical study results demonstrating its Verigene system’s ability to detect four different nucleic acids. Another system, being developed by T2 Biosystems, uses magnetic nanoparticles to identify specimens, including proteins, nucleic acids, and other materials.

Gold nanoparticles with attached antibodies can provide quick diagnosis of the flu virus. When light is directed at a sample containing both virus particles and the nanoparticles, the amount of light reflected back increases because the nanoparticles cluster around the virus particles, allowing a much faster test than those currently used.

Nanotechnology in Medicine Application: Anti-Microbial Techniques

One of the earliest nanomedicine applications was the use of nanocrystalline silver as an antimicrobial agent for the treatment of wounds, as discussed on the Nucryst Pharmaceuticals website.

A nanoparticle cream has been shown to fight staph infections. The nanoparticles contain nitric oxide gas, which is known to kill bacteria. Studies on mice have shown that using the nanoparticle cream to release nitric oxide gas at the site of staph abscesses significantly reduced the infection.

Burn dressings can be coated with nanocapsules containing antibiotics. If an infection starts, the harmful bacteria in the wound cause the nanocapsules to break open, releasing the antibiotics. This allows much quicker treatment of an infection and reduces the number of times a dressing has to be changed.

A welcome idea in the early study stages is the elimination of bacterial infections in a patient within minutes, instead of delivering treatment with antibiotics over a period of weeks. You can read about the design analysis for the antimicrobial nanorobot used in such treatments in the following article: Microbivores: Artificial Mechanical Phagocytes using Digest and Discharge Protocol.

Nanotechnology in Medicine Application: Cell Repair

Nanorobots could actually be programmed to repair specific diseased cells, functioning in a way similar to antibodies in our natural healing processes. Work is currently being done in a fantastic area of medicine: the use of nanorobots in chromosome repair therapy.


These are exciting times to live in. Twenty years from now, many of you reading this blog may not be alive. But those of us who are older can take comfort in the knowledge that the health and well-being of our children and grandchildren does indeed look very promising. The scientific revolution rolls on, and society will certainly benefit from all of it.

