Wednesday, November 25, 2015

The four-sided negation: A tool for overcoming bias in science, atheism and democracy

Towards the end of the 12th century, the Islamist raider Bakhtiyar Khilji ransacked the ancient Buddhist university of Nalanda in eastern India. Apart from the many students and monks who lived there, the monastery also housed a large library, similar in extent to the great library of Alexandria in Egypt. After he looted the wealth of the monastery and put many monks to death, Bakhtiyar Khilji was asked to decide what to do with the books in the library.

Khilji first asked if there was a copy of the Quran in the library. The answer was negative. He then ordered that all the books in the library be burned. "But valuable knowledge in the books will be lost forever", warned the caretakers of the library. Khilji famously answered as follows:

"If there is a book that says anything against the teachings of the Quran, then it is blasphemous and should be burned. If there is a book that agrees with the teachings of the Quran, then it is superfluous and should also be burned. Please take care that no book is left unburned."

The Persian historian Minhaj-i-Siraj, who accompanied the raid, recounted that the fires from the embers of the library kept blazing for several days. This action might horrify us today on account of its senselessness and violence, but there is a twisted logic to what Khilji said in defense of burning the books. An echo of this logic can be found in the pronouncement of George W. Bush after the 9/11 attacks on New York City in 2001.

You are either with us or you are against us! 

Bush did not see any shades of gray in the vast canvas of the world's political landscape. Similarly, Khilji classified knowledge in a strictly binary fashion: that which is in agreement with the Quran and that which is not. He considered the Quran itself to be all-encompassing in knowledge, so this strict binary classification precipitated his decision. But what was lost in the many books burnt at Nalanda was not only a vast compendium of facts and knowledge, but also a philosophical commentary on the limitations of the binary classification of knowledge. Indeed, the most profound examination of these limitations was made by Nagarjuna, a Buddhist scholar of the third century AD who, many centuries before the raid, served as a rector of the Nalanda university. And what Nagarjuna elaborated on was an even more ancient Indian tradition, tracing its roots at least three thousand years deeper into history, when it was codified in the Vedas - the holy scriptures of the Hindu religion. According to this tradition, logic is not 2-valued (the familiar true/false), not even 3-valued (admitting shades of gray), but 4-valued. This 4-valued logic is known in Sanskrit as Catuskoti. For a long period of human history, this logic reigned supreme in Asia. Eminent philosophers and logicians in China, Japan and southeast Asia painstakingly translated the Buddhist scriptures into their native languages, the most cherished of these being the commentaries written by Nagarjuna.

I think it is sad that this ancient logic is lost today in practice. Due to its association with religious and spiritual discourse, this 4-valued logic is sometimes considered unscientific, but nothing can be farther from the truth. In fact, the most important applications of this logic have been in objective fields. In this blog, I will argue that considering 4 sides to a negation, instead of the usual 2 sides, is a necessary requirement for both scientific study and political discourse. Essentially, negation is the most elementary operation in language. We cannot think without using negation on different concepts and ideas. But every time we apply negation, we are introducing a strong bias into our thinking, which will eventually lead us to severe errors and paradoxes. We will examine this further, while taking some interesting detours in history.

Introduction to Catuskoti or the 4-sided negation: 

Figure 1: Venn diagram illustrating binary logic 

Let's consider a simple proposition, "The cheetah is the fastest animal", and denote it by the symbol A. In regular logic, this proposition is either true or false. When we think of a proposition, we usually visualize it in the mind's eye as a straight line-segment: a kind of barometer of truth and falsehood. On one end lies truth and on the other end lies falsehood. When we are certain of the truth value of a proposition, this line-segment dissolves into two tiny dots, standing apart from each other, which we can call "A" and "Not A". However, this visualization is incorrect.

Instead, what we need to visualize is the Venn diagram shown in Figure 1. When we frame a proposition, we create an island "A" in a vast sea of possibilities, known as the universe Ω. When we use strict binary logic, we associate "Not A" with everything in Ω except the island denoted by A. Depending on how many words we use to describe A and how clearly we describe it, our knowledge of this island increases. But the vast sea surrounding the island can still be a mystery to us. To elaborate on our example, a cheetah can be described as a wild cat with a spotted yellow fur coat, native to Africa, that hunts gazelles. To go to the extreme, we may describe it by giving the entire genome of the cheetah in the code of DNA. But what exactly is "Not A"? What differences to the genome of a cheetah would classify it as a non-cheetah?

Figure 2: Venn diagram illustrating fuzzy or probabilistic logic 

As our knowledge about other animals increases, we can measure our understanding by creating an island of ambiguity, "A and Not A", that encroaches into the island of "A" in the Venn diagram. This island measures the ambiguity of the proposition, inherent in the finite length of the description of A. As we conduct experiments on how fast cheetahs run compared to other animals, we can measure the probability of A being true. Ideally, this island of ambiguity shrinks and we get back to our binary world.
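As a concrete illustration of this shrinking island, here is a minimal sketch of a probabilistic update (the trial data and the ambiguity measure are my own toy choices, not part of any established formalism):

```python
# A hedged sketch: treat each experiment as a yes/no trial ("did the cheetah
# outrun its rival?"). A Beta(1, 1) prior starts maximally ambiguous; as
# evidence accumulates, the posterior concentrates and the island of
# "A and Not A" shrinks toward plain A or plain Not A.
wins, losses = 1, 1                                    # Beta prior pseudo-counts
for cheetah_won in [True, True, False, True, True]:    # made-up trial outcomes
    wins += cheetah_won
    losses += not cheetah_won

p_a = wins / (wins + losses)        # posterior mean probability that A is true
ambiguity = 2 * min(p_a, 1 - p_a)   # 1.0 = fully ambiguous, 0.0 = settled
print(f"P(A) ≈ {p_a:.2f}, remaining ambiguity ≈ {ambiguity:.2f}")
```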

But in reality, we still have no complete knowledge of the universe Ω. What exactly is an animal? How many other animals do we know? What exactly do we mean by "fast"? Can bacteria floating in rapid ocean currents be considered animals? This is a case we did not consider when we framed the proposition A. Unlike the island of ambiguity, this uncertainty is not related to A but to the universe Ω. Specifically, it refers to the limitations of our understanding when we say "Not A": its coverage of Ω is not as complete as we imagine it to be. If we correct for this, we obtain a visualization of the 4-valued negation as follows.

Figure 3: Venn diagram illustrating Catuskoti, or the 4-sided negation.

This 4-sided negation creates four distinct zones in the universe Ω. With increasing knowledge, our understanding of all 4 zones increases, but the zone of "Neither A nor Not A" may remain forever. This is because we may predict that we don't know something about the universe without being able to express exactly what it is that we don't know. Please note that this limitation is due to two things: (1) the limits of our understanding of the world and (2) the limited expressiveness of language. Even if we allow humans to be omniscient (in whatever finite context for Ω is specified by our problem), the ambiguity in the language may never be resolved. This is a mathematical fact as well as a philosophical realization that was reached only recently in Western philosophy; we can refer to the works of Bertrand Russell and Ludwig Wittgenstein, as well as the mathematician Kurt Gödel. In contrast, these limitations were studied from the very beginning in Indian philosophy and logic.
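To make the four zones concrete, here is a minimal sketch in Python (the names are my own, not a standard library); the behaviour of negation on the two extra zones matches how negation is usually treated in four-valued logics such as first-degree entailment, where the "both" and "neither" values are fixed points:

```python
from enum import Enum

class Koti(Enum):
    """The four zones of the Catuskoti for a proposition A."""
    A = "A"                           # affirmed
    NOT_A = "Not A"                   # denied
    BOTH = "A and Not A"              # the island of ambiguity in Figure 2
    NEITHER = "Neither A nor Not A"   # the part of Ω our vocabulary does not reach

def negate(v: Koti) -> Koti:
    """Negation swaps A and Not A but leaves the two boundary zones alone:
    flipping the proposition does not remove our ambiguity or ignorance."""
    return {Koti.A: Koti.NOT_A,
            Koti.NOT_A: Koti.A,
            Koti.BOTH: Koti.BOTH,
            Koti.NEITHER: Koti.NEITHER}[v]

assert negate(negate(Koti.NEITHER)) == Koti.NEITHER
```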

In the traditional Indian philosophical system of Vaiseshika, the universe is supposed to consist of many tiny atoms. In the Venn diagram above, this translates into Ω being composed of many, possibly infinitely many, tiny discrete samples. When we use negation, it is like cutting the above Venn diagram with a knife, and our knife may not be sharp enough to pass cleanly between the different atoms. The 4-sided negation is a fundamental tool in logic that acknowledges this limitation.

Unlike the island of ambiguity given by "A and Not A", which is addressed by fuzzy or probabilistic logic, the 4 zones of the Catuskoti cannot be reduced to a single line segment between 0 and 1. Indeed, such a reduction is topologically impossible. What we need is a different mathematical construction, and I am not aware of anything that fits the bill.

I find it useful to employ an additional tool alongside the traditional line-segment between 0 and 1 with which we measure probabilities. I suggest using a circle, onto which the entire universe Ω is projected. This circle can be visualized as roughly centered on the island "A". At each point on the circle, we can measure our uncertainty in predicting the world in that direction. Unlike probabilities, this uncertainty (ignorance) can be unlimited, but it can still be characterized in some way; for example, we can say that our ignorance is greater in certain directions than in others. The metaphor of a circle also underlies what a "well-rounded" education used to mean. Until a few decades ago, a person's education was not considered complete until he acquired certain basic knowledge in diverse fields. Without this fundamental knowledge in all directions, it was considered that any decisions or conclusions made by a person would be faulty. However, with increasing knowledge and the diversification of scientific fields, our society has sadly reduced the emphasis on well-rounded education. Sad as this is, I think it is a symptom of a deeper problem in logic, where the 4-sided negation was forgotten. So where and how was this innovation in logic forgotten?

The story of Ramon Llull, and how the European Renaissance missed the 4-sided negation:

Ramon Llull was a Catalan philosopher of the 13th century who was critically important for the historical development of the European Renaissance. He traveled extensively and spent a significant amount of time in North Africa, conversing with Arabs in the Moorish centres of learning. During his time (roughly termed the middle ages), Europe had sunk into the depths of ignorance, with the classical discoveries of Greece and Rome forgotten. In contrast, Arab civilization was in full flower, its mathematicians and astronomers having compiled and built upon knowledge from lands ranging from Greece to India and China. It was the contact with Arabs, in southern Spain and Italy, that sparked the fire of European scholarship. Ramon Llull was a critical piece of this puzzle.

Arab astronomers were interested not only in mapping the course of the stars and the planets, but also in understanding how this connects to human psychology. The esoteric sciences of astrology, alchemy and parapsychology are shunned today as superstitions, but in earlier periods all scientific investigation was done in the service of such esoteric subjects. The fundamental goal of these scientists was to liberate the human condition from suffering and achieve harmony with the universe; the objective study of natural phenomena was only a means to this end. The harbinger of modern science, Isaac Newton, was himself a practitioner of alchemy. With Ramon Llull, these lines were blurred even further. The Arab astrologers were building upon the ancient Indian science of astronomy and astrology (known as Jyotisha). In ancient Indian mythology, the circle denoted the universe and the square denoted the human understanding of this universe. The Vedic sacrifice required a mathematical understanding of how to convert a circle into a square. These esoteric ideas spurred a lot of mathematical development - arithmetic, algebra, trigonometric functions, infinite series and so on. Many of these ideas reached Europe via an Arabic translation of an Indian text, Brahmagupta's Brahmasphutasiddhanta (translated into Arabic as Zij al-Sindhind by the astronomer Muhammad al-Fazari). The Arab astrologers codified these ideas into a mechanical system that makes the right predictions irrespective of who is using the system. In one form, this mechanical system became a device known as a Zairja - the first human imagining of an automaton.

Ramon Llull was deeply inspired by the Zairja and created an esoteric art of symbolism that he termed the Ars Magna. This was supposed to represent all the aspects of the universe, as well as all the aspects of God. By representing all the possible aspects of knowledge in a cartographic manner, he created the first representation of "well-rounded" knowledge. He wanted to develop a mathematical automaton for debating with the Muslims, in order to win them over and convert them to Christianity. The legend goes that he was pelted with stones in Tunis when he announced that his Ars Magna could not be understood unless a person accepted Christ as the savior. After his death, Llull was beatified by the church, but his most sincere disciple was burnt at the stake. This was Giordano Bruno, who built upon the Ars Magna to denote all the cosmological concepts and who had come to the conclusion that the sun, and not the earth, was at the centre of the universe (contrary to the church's reading of the Bible). Galileo escaped a similar fate from the church by recanting his claims, although he could see the motions of the planets through his telescope. This uneasy relationship with astrological and esoteric sciences persisted in all the Abrahamic faiths, with practitioners often accused of worshipping the devil. This was inevitable, because these sciences stemmed from pagan cultures like those of Greece and India.

The influence of Llull extended beyond astronomy. Leibniz wrote his dissertation on Llull's art, and this greatly influenced his future pursuits: calculus and logicism in philosophy. He called his own version the Ars Combinatoria. Along with Newton, Leibniz was a fundamental mover of the European renaissance in mathematics and physics. Many mathematicians tried to follow in Leibniz's footsteps, to ultimately realize a consistent and mechanical system that could predict everything. The latest expression of this dream was by Hilbert, who tried to reduce all of mathematics to a set of axioms. This adventure was cut short by Gödel, who proved that any consistent system of axioms rich enough to describe arithmetic must remain incomplete: it cannot describe everything. This crazy pursuit through history has been described aptly by the computer scientist Gregory Chaitin (who, along with Andrey Kolmogorov, is reckoned a founder of the field of algorithmic information theory). To this day, the field of knowledge representation in computer systems, ranging from databases to the internet, is indebted to Ramon Llull and the Zairja of the Arab astrologers. The amazing thing is that this historical line of development completely missed the Catuskoti, or 4-sided negation, in logic. It is a mystery why it did not reach Arab scholarship from India, and from there Europe. It is unlikely that all of this knowledge disappeared with the burning of the Nalanda library.

Had Llull been aware of the Indian tradition of logic, he would not have claimed that the people of Tunis needed to convert to Christianity to understand his Ars Magna. He would have known that the Ars Magna would forever be a work in progress, forever falling short of its goals. This was ultimately expressed in the late 1800s by Emil du Bois-Reymond as "ignoramus et ignorabimus", and at the time he was soundly ridiculed for saying it.

The history of western logic had another brief brush with India. The English mathematician George Boole was deeply inspired by Indian philosophy. His wife Mary Everest Boole, a significant mathematician in her own right, was the bridge to India: her uncle, George Everest, surveyed India and gave the eponymous mountain its name. Mary Boole also corresponded with Jagadish Chandra Bose, a noted Indian scientist of the time; she wanted him to study her husband's work, but her letter went unanswered by Bose. George Boole developed an elaborate system of logical inference using the principles of algebra, which he wanted to apply to esoteric ends (salvation, transcendence and so on). Nobody at the time believed that this system would have any practical use. This changed dramatically when Claude Shannon made Boolean logic the basis of digital circuit design, paving the way for his mathematical theory of information. It is an open question whether the development of information theory would have taken a different course had Boole incorporated the 4-sided negation into his logic. This did not happen, and computer systems today are mostly restricted to 2-sided or 3-sided logic (infinite-valued logics, like fuzzy systems, are really extensions of 3-sided logic, as illustrated in Figure 2).


The philosophy of Nagarjuna:

In philosophy, western thinkers from Kant onwards have expressed awareness of the limitations of spoken knowledge. Russell and Wittgenstein systematically characterized the various dilemmas and paradoxes that arise from these limitations of expression. In Indian philosophy, awareness of these limitations was fundamental from the start, owing to the Catuskoti logic. The onus in Indian philosophy was not on finding these limitations, but on achieving spiritual transcendence by overcoming them, even if they cannot be expressed in spoken language.

Historically, Indian philosophy was divided into 9 streams or viewpoints (Darshanas in Sanskrit). Of these 9 streams, 6 support the existence of a conscious self (Samkhya, Nyaya, Vaiseshika, Yoga, Mimamsa, Vedanta) and 3 oppose the existence of a self (Buddhism, Jainism, Charvaka). But all these systems germinate from the same root and stem, the Samkhya tradition, which provides the mythological imagery and symbols used to argue different concepts. I have earlier written an introduction on this blog to the Samkhya system (the word means enumeration), clothing traditional philosophical terms in a modern garb of vocabulary so that we can better understand them. Mythological images can express notions of paradox better than linguistic symbols and logic, so many Indian religions have explored this route, developing a plethora of symbols and stories. The idea is that transcendence could be achieved through devotion, or love, or right action, without the need for knowledge expressible in spoken form.

However, the path of knowledge was not altogether absent. Philosophical systems using logical inference and deduction were developed first by the Nyaya and Vaiseshika traditions, which are based on the Catuskoti. Knowing these traditions (or viewpoints) was considered essential for deeper philosophical study in the other systems. The first significant challenge to these systems was given by Gautama Buddha. The Jain tradition also has an elaborately developed logic, criticizing and thereby extending the earlier Nyaya systems. Probably the most extensive development of logical inference for transcendental goals was made by the Buddhist philosopher Nagarjuna in the 3rd century AD. The Australian philosopher Graham Priest, who specializes in dialetheist (or paraconsistent) logic, wrote a great introduction to Nagarjuna's philosophy from a western perspective. He believes that Nagarjuna's vision was greater even than that of Wittgenstein and Russell, as he observed greater paradoxes.

I will try to give a short introduction to Nagarjuna, but before we go there, we should know that he remains a profound influence in many Buddhist countries. His Madhyamika (middle-way) tradition has deeply influenced Buddhist thought in China and Japan. For example, the Japanese koans (or riddles) stem from a tradition that goes back to Nagarjuna. To get a more thorough overview of this philosophy, one should consult these living traditions; what I can give in this blog is a very brief glimpse. In India, the influence of Nagarjuna was profound on Shankara, the philosopher who popularized the Advaita tradition in Hinduism. Shankara is probably the closest link between existing religious practices in India and its ancient philosophical tradition, and I believe that without understanding Nagarjuna we cannot decipher that link. I am not capable, in any measure, of undoing these mysteries. But I think I can give a hint.

Recall that we represented the Catuskoti using a Venn diagram with four zones, for studying an objective question such as "Is the cheetah the fastest animal?". But any such objective study is limited to the very ground on which we observe the different phenomena. In other words, we are restricted to the 2D plane of the Venn diagram, although the real universe may have many more dimensions. The computer scientist Alan Kay once gave a good analogy for this situation. We can imagine that we are standing in a deep gorge, like the Grand Canyon in the USA, shaped by eons of erosion and landscaping. Standing in this gorge, we can observe the many layers of sedimentary rock that reflect the movements of history. But our vision will be restricted by the rocks of the gorge. Very rarely do we take a fleeting glimpse at the sky and wonder what may lie beyond. Alan Kay calls such a glimpse a "Kerpow", a punch that pushes one beyond the dimensions of the world one lives in. In Buddhist and Advaita philosophy, salvation or transcendence is considered to be obtained by a series of such "kerpows".

Firstly, we do not even know the topology of the 2D plane of reality that we showed in the Venn diagram. Buddhist and Hindu philosophy considers this to be "Samsara", a cyclical world where we keep retracing the same steps. I think the Dutch artist Escher gave a brilliant visualization of such a world, with ants walking along a Möbius strip.


Figure 4: Ants walking around a Möbius strip - illustrating the Buddhist Samsara 

Nagarjuna considered the problem of logic in such a world. A rationalist is like an ant walking around this Möbius strip of logical constructs (the zones of the Catuskoti). In order to escape from this restrictive world, Nagarjuna developed a 9-valued logic. The first four zones correspond to one side of the plane, as illustrated in Figure 3. The next four zones correspond to the other side of the plane, an inversion of the previous four. Nagarjuna said that an inquisitive and philosophical mind will not be satisfied with any of these 8 zones and will seek a 9th zone, which is an escape from the Möbius strip into three-dimensional space. Obviously, this is a procedure that has to be repeated several times, as we approach concepts of greater logical ambiguity. In Buddhist mythology, this is akin to climbing Mount Meru, at the top of which lies salvation or Nirvana.

A very similar rhetorical exercise in negation is undertaken in the Advaita tradition, whose followers perform negation in a series of steps called "Neti Neti" ("not this, not that" in Sanskrit).

As we can see, Buddhist and Hindu philosophy graduated from a 4-sided logic suitable for objective analysis to more complex 5-sided or 9-sided logics suitable for spiritual investigation. However, this gave 4-sided logic a bad image, as it became conflated with esoteric and spiritual agendas. I believe a proper application of the 4-sided negation to objective scientific fields will pay fruitful dividends, especially in situations where our ignorance of the field is immense but predictable.

How to overcome bias in atheism and democracy: 

Bias in science is dangerous, but bias in politics has immediate consequences for humankind. Even if people don't have a thirst for knowledge, they need to survive their daily struggles. So if our political discourse has serious limitations rooted in the logic we deploy, we will face the trauma of misunderstanding and violence. Unfortunately, the majority of people who have active political opinions today use 2-sided or 3-sided logic in their heads. That means they do not understand the 4th zone (neither yes nor no), which reflects their relative ignorance of the subject.

Faulty language in debate is like bad posture of the human body: used repetitively, it results in pain. A good analogy is the kind of vertigo caused by tiny calcium crystals that accumulate in the inner ear, destabilizing the sense of balance. People who suffer from this vertigo feel that their world is constantly spinning. Needless to say, it is a nightmare condition. A cure for this vertigo was found quite recently by an American doctor named John Epley. Using a 3D cast of the ear's anatomy, he devised a series of motions (called the Epley maneuver) that moves these particles from the troublesome spot in the ear to a non-bothersome spot. The relief felt by the patient is dramatic and immediate, almost leading them to suspect black magic. I think heated political debates would benefit from the 4-sided negation in a similar manner. By understanding how our language and words are framed, and how limited they are in expressing certain ideas, we can overcome our ignorance and conceit.

Of all the debates in politics, I think nothing is as absurd as the debate on religious freedom.  This is because nobody wants to really understand the other side in a religious debate. The famous (notorious) atheist Richard Dawkins recently put this as "If I want to repudiate Leprechauns, I should not be expected to understand Leprechaun ideology". This laziness is understandable. However, what is not excusable is the reluctance to see one's own ignorance. By using 4-sided logic, instead of the more intuitive but faulty 2-sided logic, we will develop a habit of systematically understanding where our ignorance lies.

In France, there is currently a heated debate about the limits of religion. Owing to its historical revolutionary spirit against the Catholic church and the monarchy, French political life is very secular. However, this secularism comes from the pick-axe and shovel fights of the 18th century, where all the debates were fought using the 2-sided negation. The secularist in France is highly confident about how right he is. In this way, he is not much different from a rabid religious fundamentalist. The problem here is not with the ideology, but with the 2-sided negation.

Wise people of many cultures, including religious people, have learnt to be conscious of the limitations of their knowledge. In India, where many cultures and religions have intermingled through history, people have a high tolerance for ambiguity. A person may be Hindu, but pray in a church, or seek blessings from a Muslim saint. A musician may be a deeply religious Muslim, but still offer prayers to Saraswati (the Hindu goddess of learning); a good example was Ustad Bismillah Khan, the gifted shehnai player. There are thousands of examples of Hindu and Muslim saints who cherished the company and customs of other religions. My favorite is Kabir, a devout Muslim poet who sang the praises of Rama. The evidence of this cosmopolitan culture in India is present in all religions, including Christianity and Islam. All these religions are integral parts of the rich tapestry of Indian culture today.

There was recently an opinion piece in The New York Times comparing the state of secularism in India and France. I find this essay a nice illustration of the problems with a 2-sided secularist outlook. France has had an awful history of religious discrimination: against Jews, against Protestants, and now against Muslims. By comparison, the history of India is a miracle, with very few incidents of religious violence for such a large population. In my opinion, the reason India achieved this is not greater scientific knowledge or better legal rights, but a cultural assimilation of the 4-sided negation, which allows for greater tolerance in debate. In contrast, due to its reliance on 2-sided, black-and-white logic, France is in greater danger of discriminating against its own citizens.

However, India is not an idyll of peace. Its traditional tolerance is today under great threat. The urge for purity and austerity, made notorious by Islamist fundamentalists, is encroaching upon every religion. In India, the Hindutva brigade is creating a cartoon version of Hinduism in which the 4-sided logic is truncated into a 2-sided one. This desire to bifurcate everything in the world into two distinct boxes of good and evil, or pure and impure, is a self-destroying engine. Happiness and salvation in this world lie in a tolerance for ambiguity and in a consciousness of our limitations. This is as true of atheists as of religious people, of whatever stripe.

The works of Shakespeare are considered a turning point in western literature, where the internal dilemmas and mental confusion of characters are shared directly on stage. A human being is thus presented as a process in the making, and not as a finished product. The most famous example is Hamlet's soliloquy "To be, or not to be". This introspection of our mental processes is essential if we are to become better people. The story of Hamlet, however, is a tragic one: Hamlet succumbs to the guilt of murder. Had Shakespeare used the 4-sided negation in Hamlet's soliloquy, following the lead of Nagarjuna, he would have asked

"To be, or not to be"
"Or to be and not to be"
"Or neither to be nor not to be"
"Is there still a question ?"

A dramatic gesture would be to make Hamlet nod his head using the Indian head wobble. I will end this blog with that funny note. I believe Nagarjuna's philosophy is responsible for this gesture, which stumps many foreigners visiting India, who cannot tell whether their Indian friend is saying yes or no. For clarification, they may ask:

"Do you mean yes ?" 
"Or no ?"
"Or yes and no ?"
"Or neither yes nor no ?"

"Or did you just reach nirvana ?" (This is an extremely rare case) 

Addendum: Representation of the Catuskoti using predicate logic


After a discussion with Joscha Bach (http://palmstroem.blogspot.com) on Twitter, I realized that the Catuskoti can be implemented in predicate logic using 3 predicates: A, Unknown(A) and Unknown(Not A). When these 3 predicates are treated as binary variables, they give rise to 8 states. More interesting things can be represented when they are treated as continuous variables, taking any value in [0,1]. The space spanned by these 3 predicates can be visualized as a cube standing on 3 axes. I propose to call it Gotama's cube (after the ancient Indian logician who composed the Nyaya Sutras). The 8 states appear as the corners of this cube, enumerated as follows:

0) Unknown Not A, Unknown A, Not A
1) Unknown Not A, Unknown A, A
2) Unknown Not A, Known A, Not A
3) Unknown Not A, Known A, A 
4) Known Not A, Unknown A, Not A 
5) Known Not A, Unknown A, A 
6) Known Not A, Known A, Not A 
7) Known Not A, Known A, A 

Of these 8 states, states (4,5) are indistinguishable: if A is unknown, the third coordinate carries no information. For practical purposes they can therefore be treated as one state, marked "Unknown A". Similarly, states (0,1,2,3) are indistinguishable and can be combined into a single state marked "Unknown Not A". With these 4 states, we get back to our traditional representation of the Catuskoti. The clubbed state "Unknown A" is equivalent to "A and Not A", and the clubbed state "Unknown Not A" is equivalent to "Neither A nor Not A".
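Here is a minimal sketch of this enumeration and clubbing in Python (the function and variable names are my own, not an existing library):

```python
from itertools import product

def club(known_not_a: bool, known_a: bool, a: bool) -> str:
    """Collapse one corner of Gotama's cube into one of the 4 Catuskoti zones."""
    if not known_not_a:            # states 0-3: we cannot even delimit "Not A"
        return "Neither A nor Not A"
    if not known_a:                # states 4-5: "Not A" is delimited, A itself is not
        return "A and Not A"
    return "A" if a else "Not A"   # states 6-7: full knowledge

# Enumerate the corners with the same numbering as the list above.
for known_not_a, known_a, a in product([False, True], repeat=3):
    index = 4 * known_not_a + 2 * known_a + a
    print(index, club(known_not_a, known_a, a))
```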

Why do we need 8 states in predicate logic, and not 4, to represent the Catuskoti?

This is because the Catuskoti states clearly that exactly one of the 4 states can hold: A, Not A, A and Not A, or Neither A nor Not A.

Because of this, we cannot map either of the states (A and Not A) or (Neither A nor Not A) to the single predicate Unknown(A). The predicate Unknown(A) can be true at the same time as A or Not A; such combinations are natural in predicate logic, but they are strictly forbidden in the Catuskoti. So the traditional method of handling uncertainty in predicate logic with a single predicate Unknown(A) (which, by the way, is different from fuzzy or probabilistic logic) is insufficient to cover the space of the Catuskoti.

However, it is possible to cover all 4 zones of the Catuskoti with the 8 states given by 3 predicates, by allowing for some redundancy. Please note that this is not the most compact representation of the Catuskoti: in theory, you need just 2 predicates to map these 4 states. However, those predicates are not the usual Unknown(A).

I am not sure whether many people use 3 predicates, with an additional Unknown(Not A) alongside the usual Unknown(A), in computer-based logic and knowledge representation. It is a fairly simple method to adopt, and it could potentially give rise to more stable knowledge representations. If you are a researcher in knowledge representation or machine learning, please go ahead and try it. All you need to do is replace your traditional error bar with 2 confidence values instead of one. :)
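As a sketch of what carrying two confidence values might look like (the Claim class, its field names and the 0.5 threshold are hypothetical choices of mine, not an existing API):

```python
from dataclasses import dataclass

@dataclass
class Claim:
    belief: float         # degree of belief in A, in [0, 1]
    unknown_a: float      # how poorly A itself is pinned down, in [0, 1]
    unknown_not_a: float  # how poorly the complement "Not A" is pinned down, in [0, 1]

    def zone(self, threshold: float = 0.5) -> str:
        """Map the continuous cube back to the 4 Catuskoti zones."""
        if self.unknown_not_a > threshold:
            return "Neither A nor Not A"
        if self.unknown_a > threshold:
            return "A and Not A"
        return "A" if self.belief >= 0.5 else "Not A"

# Example: "Not A" is well delimited, but A itself is still ambiguous.
print(Claim(belief=0.7, unknown_a=0.8, unknown_not_a=0.1).zone())  # -> "A and Not A"
```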

You can also visualize the uncertainty on Gotama's cube: using the HSV representation, map the hue to the A/Not A axis, the saturation to Unknown(A), and the value (intensity) to Unknown(Not A). The interesting thing about this visualization is that it exposes how, without saturation, you cannot distinguish hue; and without value, you can distinguish neither saturation nor hue. But in lighter shades, the colours become better visible.
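A minimal sketch of this colour mapping (the helper function and the particular hue choices are mine, not an existing API):

```python
import colorsys

def cube_to_rgb(a: float, unknown_a: float, unknown_not_a: float) -> tuple:
    """Map a point of Gotama's cube (all coordinates in [0, 1]) to an RGB colour."""
    hue = a * (2.0 / 3.0)      # 0 = red for "Not A", 2/3 = blue for "A" (arbitrary choice)
    saturation = unknown_a
    value = unknown_not_a
    return colorsys.hsv_to_rgb(hue, saturation, value)

# With zero value, every point collapses to black: hue and saturation
# (the A axis and Unknown(A)) become indistinguishable, as noted above.
print(cube_to_rgb(1.0, 0.5, 0.0))   # (0.0, 0.0, 0.0)
print(cube_to_rgb(1.0, 0.5, 1.0))   # a light blue
```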

Do let me know if you do any such experiments. I will be happy to report them in the comments here. 




Friday, November 13, 2015

Of electric fences and free software: A cybernetic theory of freedom


 Cattle imprisoned by an electric fence: Illustrating a cybernetic conception of freedom 

On privacy and freedom: 

There is a remarkable project under development called the Freedombox, which is putting together a collection of privacy software for browsing the internet securely. It is built on top of the Debian stack of free software, and includes the powerful Tor browser and hidden services. I recently saw a great presentation of the Freedombox and its features by Sunil Mohan Adapa, a good friend of mine from my university days. After the Snowden revelations about industrial-scale spying on the public by the NSA and other government security agencies, there has been a greater demand for software that protects the privacy of its users. However, encryption software is typically difficult for the general public to use. This puts it out of reach of most people, except for technically sophisticated programmers and actual criminals who are careful to cover their tracks.

Why do the general public deserve privacy on the internet, if they are not doing anything wrong? The vast majority of people are law-respecting citizens, but they have the right to the human dignity of privacy. As human beings in control of their own lives, they deserve the right to hide their wallets, bank statements, personal photographs, medical records, psychological weaknesses, or browsing histories from people seeking that knowledge. If and when human beings lose their privacy, they also lose the agency to control their own lives. Instead, they become like cattle, which can be herded into flocks and exploited by a master. This master can be a government, a large corporation, or a criminal agency - whoever has the wherewithal (technical and logistical capabilities) to snoop on the unsuspecting public. If humans lose their privacy, they are like stray animals, waiting to be reined in by a harness and branded by the hot iron of the husbandman. If the husbandman is caring, they will be milked or fleeced, but their lives will be spared. If the husbandman is vicious, they will be butchered and served on the table. In either case, they will lose their freedom to roam as they please. So the question of privacy comes down to human dignity. Do we want to treat people as humans or as cattle?

What sets the Freedombox apart from other similar software projects is that it aims to protect novice users without any programming expertise. This includes people using home computers and smartphones to browse the internet, share pictures with friends, or control home appliances using web apps. To reach such a general user, any privacy technology should be as simple to use as flipping a light switch. This is what the Freedombox aspires to be. During his presentation, Sunil demonstrated the installation procedure of the Freedombox with a theatrical gesture, connecting the box to the internet cable and to the power plug (which is all the installation takes), saying
This is how we provide freedom, one home at a time  
This is an amazing achievement, but I believe victory is not that easy. For a long time, the bane of free software has been its technical complexity. Using most commercial applications with GNU/Linux has been hard or even impossible. Many hardware vendors refused to provide drivers for their devices to interoperate with free software systems. If free software is a wide open field of freedom, there existed many fences that daunted people from accessing this freedom. There have never been any doubts about the quality and efficiency of free software systems; in fact, most web servers and large-scale software systems are built on top of free software, but such systems are built by capable engineers who are not daunted by the barriers of integration. However, this situation has been changing in the last few years, spurred by projects such as Ubuntu, which aim to reach the general public (known in programming parlance as n00bs). Even compared to the guarded optimism of Ubuntu, what the Freedombox aims for is radical: giving the power of a veteran system administrator to a novice user, all in a box that can be plugged in and switched on. Essentially, the Freedombox destroys the fences, the single factor that has limited the spread of free software.

Standing on the shoulders of Skywalkers: 

In order to assess whether this project will succeed, I think we have to go back to the dawn of computing. The seeds of the computing revolution were sown in the decades before the second world war. As the war raged on and everybody was awed by the power of the atom bomb, a greater force was rumbling below: the force of computing machines. It is said that computer science is about computers only as much as astronomy is about telescopes. Computers and algorithms are a powerful tool like no other in human history. What is it that they help us to see? In my opinion, there were three people who understood computers to the full extent of what they help us to see, and I don't include the two giants, Alan Turing and John von Neumann, in that list. The three people were Claude Shannon, Norbert Wiener and Andrey Kolmogorov. In my mind, I call them by the acronym Skywalkers, taking one letter from each person's last name: (S)ky(W)al(K)ers.

In 1943, while the second world war was still raging and most mathematical work was classified, Alan Turing visited Bell Labs in the USA, where he crossed paths with Claude Shannon. He later mused in his diaries about the conversation he had with Shannon.
Shannon wants to feed not just data to a Brain (a nickname Turing gave to a futuristic computing machine), but cultural things! He wants to play music to it.
This incident helps me explain why I have chosen the Skywalkers as visionaries for where  and how computation can be applied to understand the world. In my opinion, these three scientists captured the grand vista of everything that can be analyzed in a computational and information theoretic manner: physics, biology, media, sociology, psychology, the very process of thinking itself - essentially, the entire skies. Everything appears in a more profound manner when seen through a computational lens, with all the underlying veins and sinews that make up the fabric of reality glistening in bright contrast.

I remember chatting with a physicist friend of mine when our conversation somehow came to the hypothetical case of aliens observing the earth and judging the complexity of its different artifacts. My friend thought that the most complex objects to be found on this planet would probably be the digital computers, with their microcontroller chips and complex electronics. But I thought differently: the most complex artifacts would be the spoken conversations between humans. There is an inherent complexity in human language that is quite extraordinary. From the perspective of a computer scientist, computers themselves are not that complex. The computation required to generate complex artifacts may happen not only on digital computers, but also in biological ecosystems or in human societies. The complexity of these artifacts can be measured in an objective manner, as the length of the shortest computer program that can generate them. We need to thank the Skywalkers for giving us the conceptual tools to do this.
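This shortest-program length (Kolmogorov complexity) is not computable in general, but a common hedge is to use an off-the-shelf compressor as a crude upper bound; a minimal sketch:

```python
import zlib

def complexity_estimate(text: str) -> int:
    """Size in bytes of the zlib-compressed text: a rough upper-bound proxy
    for the length of a program that reproduces it."""
    return len(zlib.compress(text.encode("utf-8"), 9))

print(complexity_estimate("ab" * 1000))          # highly regular, compresses to very little
print(complexity_estimate("A rambling conversation, full of asides, repairs and in-jokes."))
```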

Essentially, computer science is the study of information, which is everywhere. It is the study of control and feedback, which are everywhere. It is the study of probabilities and complexity, which pertain to everything. The Skywalkers are the giants who captured this vast canvas for computers.

Newton once said that
If I have seen further, it is by standing on the shoulders of giants. 
This quotation goes back to the 12th century, to Bernard of Chartres. Chartres is a town in central France that houses one of the oldest cathedrals; we will come back to this cathedral at the end of this blog. But we can reframe the quotation to refer to computer science as follows.
If we can learn to stand on the shoulders of skywalkers, we can also walk the skies. 
There have been many great computer scientists (and indeed scientists in other disciplines) who followed the lead of Shannon, Wiener and Kolmogorov. Going with the Star Wars theme, we may think of these scientists as Jedi knights who can control the Force, the underlying information pervading this universe. Unfortunately, despite the spread of personal computers and smartphones, most of humanity cannot think in computational and information-theoretic terms. This makes them particularly vulnerable to manipulation.

Behavioral conditioning and cybernetic freedom: 

Of the three Skywalkers, the person who understood the most about how computers would affect humans is Norbert Wiener. I am currently reading an introductory book for the general public that he wrote in the 1950s, titled "The Human Use of Human Beings". Reading this book is like walking on a glacier, slow and painful. When I was living in the Alps, I had my share of walking on glaciers: deep crevasses lurk beneath your feet, and you should be watchful of everything that melts and tumbles into the bowels of the earth below. But after you cross the glacier and climb a peak, you may catch a view of the entire glacier as it carves through the landscape. Only then do you get a grasp of its immense physical power. The ideas of Wiener are like such a glacier, cutting through rocks and boulders, across fields as diverse as biology, sociology and physics.

Unlike many of his contemporaries, Wiener realized that there is a dark side to computers, even prompting him to exclaim

I agree with what Einstein used to say about the three bombs: there are three bombs. The first one is the atomic bomb, which disintegrates reality. The second one is the digital or the computer bomb, which destroys the principle of reality itself - not the actual object, and rebuilds it. And finally the third bomb is the demographic one. 
This is a stunning pronouncement made just after the destruction of Hiroshima and Nagasaki. Wiener was a pacifist and an environmentalist, and by wearing these hats, he had no trouble finding people agreeing with him about the first and third bombs. But in his day, talking of computers as a "bomb" that would destroy the very principle of reality was quite unusual. The only people who could probably remotely understand this pronouncement were behavioral psychologists, following the lead of Pavlov. Wiener explains how a machine can perform behavioral conditioning on a living creature, using the example of cattle and electric fences.

On cattle farms, the construction of wire fences strong enough to turn a steer is not easy. It is thus economical to replace a heavy fence of this type by one where one or two relatively thin strands of wire carry a sufficiently high electric voltage to impress upon an animal a quite appreciable shock when the animal short-circuits it by contact with the body. Such a fence may have to resist the pressure of the steer once or twice; but after that, the fence acts, not because it can hold up mechanically under pressure, but because the steer has developed a conditioned reflex which tends to prevent it from coming into contact with the fence at all. Here the original trigger to the reflex is pain; and the withdrawal from pain is vital for the continued life of any animal. The transferred trigger is the sight of the fence. 
Crucially, from the perspective of cybernetics, it does not matter whether the communication is between man and man, man and machine, animal and machine, or machine and machine. The purpose of any communication is to establish control. Whether we as human beings allow ourselves to be behaviorally conditioned by electric fences (in their various metaphorical forms) is up to us. In cybernetic terms, freedom corresponds to control over the electric fences that deter humanity from trespassing. Where are these electric fences in today's society, and who controls them? It is only possible to answer this question when one is versed in the tools of information processing.

Most people didn't understand Wiener. John McCarthy, another great hero of mine, who went on to coin the term "artificial intelligence" (and found the scientific field), described Wiener as a "humongous bore". Along with his co-conspirator Marvin Minsky, he basically created the "new" field of artificial intelligence in order to escape the prying clutches of Norbert Wiener and John von Neumann (another giant with a massive ego). Wiener had been a terrible mentor (his mentorship of McCulloch and Pitts, who invented artificial neural networks, is another terrible story), and he had his own issues from an awful childhood, having been raised in the public eye as a child prodigy by a disciplinarian father. However, by abandoning Wiener, the fledgling field of artificial intelligence (AI) got divorced from its cybernetic roots, where machines were compared not only with humans but also with animals. The cognitive machinery of the human brain shares its core architecture with animals, and most of its functions are subconscious. The thoughts expressed in language, driven by conscious desires and motivations, are only the tip of the iceberg. But this final, superficial layer became the sole focus of AI, and mostly remains so even today. Of the two, Minsky gained a closer understanding of Wiener, aided by his careful study of Wiener's protégés McCulloch and Pitts. However, I believe even Minsky held an unhealthy fixation on higher-level cognitive processes, those visible under conscious human inspection. There were very few scientists in the computer science community who were studying subconscious mental processes and building defenses against their exploitation, for the safety and freedom of humans.

Freedoms in Free and Open-Source Software: 

The plight of AI as it got fixated on higher-level cognitive processes is not just an esoteric artifact, worrisome only to scientists and philosophers. In fact, the merry pranksters led by McCarthy and Minsky conquered the entire realm of computing technology. They spearheaded the revolution of personal computing in the 1980s. They were instrumental in defining the world wide web and its myriad applications. Most relevant to this blog, Richard Stallman, the founder of the free software movement, came out of the MIT AI Lab. It is he who defined what "freedom" means with respect to the use of software: the freedom to use it, the freedom to inspect the source code, the freedom to modify it, the freedom to share it, and so on. These freedoms are necessary for the human user of software to be his own master, but they are not sufficient by themselves. True, if these freedoms are absent, whoever controls the software also controls the user. But even if all these freedoms are present, the human user can still be under the control of the software provider, for the simple reason that the human user is unaware of his subconscious mental processes. In order to achieve true freedom, what needs to be revealed openly (in a human-readable format) is not only the software code, but also the hidden mechanics of the user's cognitive processes while he is interacting with the software.

In the earlier days of personal computing, the functionality and specification of software could be given in a small, finite description. This functionality was the same for every user testing the software. So in principle, even though the average user is unaware of how the software is gaming his subconscious brain, it is possible that somebody exists with Jedi powers of information processing who can analyze the entire chain of human-software interaction. However, this has changed dramatically in recent years. Nowadays, by accumulating a lot of data, a software system evolves a personalized model of the human user and reacts to him in a unique, personalized manner. Consequently, the reaction modalities are unlimited and cannot be tested by other users. This is particularly true of cloud-based software, which hides the data collected from users in a distant database. But even if the raw data is available for inspection, the average user may not have the computational capacity or the education in information processing to understand how the software is curtailing his freedom.

Beyond free software, there exists the paradigm of open-source software, which does not support all the freedoms listed by Stallman. This paradigm grew out of the work of Linus Torvalds and others, who argued that the freedom of source-code visibility is sufficient for most purposes. Many large internet software companies, such as Google and Facebook, are releasing the source code of certain software projects. This is welcome, but I believe it is still unclear whether this dumping of source code helps the freedoms of users. In fact, large computer systems today use tons of user data that is locked in the proprietary file systems and databases of these companies. It is this data, more than the software source code itself, that determines the behavior of the software. Without this data, I do not think a machine learning algorithm can technically be considered even "open-source". In this way, these public releases of source code fall short of even Linus Torvalds's definition. What we have instead is a phenomenon similar to a viral infection. Due to the limited size of the ecosystem of programmers working in a particular field, the public releases of source code are an attempt to monopolize these ecosystems. What results from this infection is an early stage of technology lock-in, before the technical field has even matured. This is not a new phenomenon; we can argue that Unix and the C programming language are themselves artifacts of technology lock-in. But the consequences of technology lock-in in today's era of big data will be far more severe.

Finally, I want to talk about the tremendous pressure on scientific researchers to release the source code of the software systems they used for conducting their experiments. I think it is admirable when a scientist releases source code, but this cannot be imposed. Often, the progress of science depends on the personal investigations and passions of a scientist. The scientific community needs to tolerate dissenters with diverse perspectives, and create avenues to build a common field of study from multiple viewpoints. Often, it is necessary for scientists and research institutes to build credibility by publishing a series of results before they can release their methods or software. If they release their secret sauce before they achieve due recognition, their unique perspective on research will be submerged in the din of the majority. Essentially, if the software is not even released (freedom 0, the freedom to use it, is not granted), there is no question of violating the other freedoms of source-code inspection and so on. On the other hand, releasing source code without releasing data is a very sly game.

Education of a new generation of computational citizens: 


We are still in the very early stages of the computing revolution. In terms of impact, the only comparable inventions are those of writing and printing. Both these technologies were revolutions in how information is organized and processed, and they tremendously affected all aspects of society: economics, law, health, education and government. Overall, these effects have been resoundingly positive. But society needed a couple of centuries to absorb the aftershocks of these technologies. The domain that concerns us the most is education, as this is where future citizens are forged.

When writing was invented, it facilitated the externalization of memory, helping people to think and reason about concepts and ideas too large to fit in a brain. The pessimist may have said that people would need to remember less and that this would corrupt the faculty of memory (such an opinion was voiced by Socrates), but the opposite happened. New methods for expanding memory, such as the Method of Loci, were invented around the same time as writing. This story of the art of memory is recounted beautifully in Joshua Foer's book "Moonwalking with Einstein". In the long run, writing as a technology created a new way of doing science as well as politics.

Something similar happened with printing, which greatly expanded the reach of knowledge for ordinary citizens. The pessimist may have said that with so many books being printed, quality would be dwarfed by sheer quantity, but quality actually increased. New methods were invented for judging the quality of books and for quickly scanning large passages of text. The writer Nicholas Carr argues in his book "The Shallows" that the practice of quiet contemplation while reading a book encouraged other mental processes dealing with higher cognition (engaging the prefrontal cortex of the brain). The age of Enlightenment was a direct result of this expanded consciousness, as well as of a greater tolerance and empathy for the perspectives of others.

With computing, we simply have not yet experienced such a revolution in education. I think Alan Kay, the inventor of object-oriented programming (along with a whole bunch of other ideas), expressed this most clearly. He says that nobody, except in scientific fields, uses computers for what they are good for. Everybody else is using them to mimic old media: books, television, movies and so on. What we need to cultivate is a new type of thinking, based on a computational and information-theoretic perspective, what I described earlier as standing on the shoulders of Skywalkers. A person who is able to think computationally will not need protective barriers that treat him like a child. He will be conscious of his cybernetic freedoms as a fully educated citizen. No powers - whether in the government, in the greedy towers of finance, or in the greedy dungeons of crime - will take him for granted. But to get there, we need new programming languages and new paradigms of computing. We need new methods for the public to share data and to access it computationally. What we need are freedoms far greater than those currently envisioned by the free software and open-source movements.

If the average person can program computers and regularly analyze information through a computational lens, this will produce physical changes in the brain, similar to how the quiet contemplation of reading a book changed the prefrontal cortex. I am optimistic that when this happens en masse, we will build a society that is far more cultivated and compassionate than the one we have today. In fact, this might be our only salvation from the ecological and environmental catastrophe that we are hurtling towards. But we don't know how long this will take. Alan Kay says that it may take a couple of centuries. He likens himself to people laying bricks to construct a cathedral, like the cathedral of Chartres in France, which took several generations to finish. These cathedrals were not only wonders of architecture; they also housed large libraries where the scholarly works of Arabia, China and India, as well as of ancient Greece and Rome, were studied by European students. This sparked the period of the Renaissance.

I hope we will see a similar computational renaissance in a future not too distant from now.




Alan Kay's interview: Go to 6:20 to get his message on the culture of computing. But watch the entire interview for greater wisdom. 

PS: The title of this post is a reference to John Steinbeck's novel "Of Mice and Men". In the novel, there are two characters, George and Lennie, who can be interpreted in many ways. The conscious and self-reflective George refers to the intelligent programmer who has a deep desire for freedom and who wishes good for society. The kind-hearted brute Lennie can refer to the uneducated citizen who is given access to the vast powers of computing; he may also refer to computation itself, personified. Easy references can be found for the other characters in the novel, each with their own greed and insecurities. The novel has a sad ending: George shoots Lennie to spare him a more painful death. This is my greatest nightmare. I hope the future does not hold such an end for computing. I hope our generation of humanity will avoid this end.


Sunday, October 25, 2015

Can fasting prevent Alzheimer's disease ? And how to answer such questions

Women of Rennell Islands: Were they spared from Alzheimer's disease due to famine experienced in childhood ? 

Humans love to think of themselves as gods, as bright dazzling balls in the sky, radiating ideas and exchanging pointers on how their ideas can solve all the problems in the world. Nowhere is this conceit more apparent than in people blathering on the internet. Mea culpa. But the other day, this conceit was broken for me, when all of a sudden I woke up with a stiff back. Years of lounging at a work desk have taken their toll on my back muscles, and they decided to apply for an early retirement package. Of course, I am not willing to grant them that. But their message has been heard, their point has been taken. As if to reinforce the point, which might be lost in subtlety and in the humdrum of life, the changing seasons also gave me a sore throat and a cold. I am reminded that I am a bag of sticks held together by fibers, wires and valves, any of which can be twisted out of shape. I used these sick days at home to read through a set of books that I purchased recently. As it happens, they have more than a little to say about this physical condition of human existence.

Humans are pipe-shaped animals, with the biggest pipe - the intestinal pipe - going from the mouth to the anus. 
This is how Giulia Enders colorfully puts it in her book "Gut: The Inside Story of Our Body's Most Underrated Organ". This book was a good starting point for learning how viscerally our brains are connected to our viscera. But I also finished another book that makes an exhaustive deconstruction of the notion that we humans live in a Platonic sphere of abstraction and ideas. That is "The Village Effect" by Susan Pinker, which forcefully reminds us how our brains are tied to our bodies. I think every computer engineer needs to read this book. I recommend it especially to my friends in Silicon Valley, plotting their next big social apps or cyber addictions for the unsuspecting masses. As I mentioned, people are not dazzling gods made of ideas, but are inherently tied to their physical bodies, which are like bags of sticks. In fact, human bodies are even more rickety than that analogy suggests.

Firstly, our bodies carry the genetic makeup of the DNA code, which we have not yet fully deciphered, but which predisposes us to certain physical and mental states. Beyond the DNA, our bodies have digital switches, triggered by the environment, that turn large sections of genetic code on or off. This is known as epigenetics, a phenomenon that can even be transmitted from parents to children. Beyond this digital circuitry at the cellular level, our bodies have a layer of complicated organic chemistry, known as the endocrine system. Various chemicals secreted into our body by different glands make our hearts beat faster, activate our immune systems or shut them down, make us feel comfortable or cranky, and dramatically color our emotional states. Chemicals such as serotonin, epinephrine and oxytocin dictate how we behave as humans. This is why people get addicted to drugs, and why they can be saved by pharmaceuticals. At a layer higher than chemistry, our body is a network of neurons - not just in the brain but, significantly, also in the gut. The thoughts that we think, the experiences that we have, and the foods that we digest all influence which chemicals get secreted into our bodies and which digital switches (epigenetics) get turned on so that our DNA can be expressed. Digital technologies pretend that we are simply a software of consciousness running on the hardware of our brains. But this is far from the truth. Designing a good user interface means taking care of all the layers of being human, including the chemical and genetic layers. Needless to say, addressing this level of complexity has not been attempted by anybody yet, and what we have today are awful user interfaces - both for digital systems and for social systems.

This brings me to another book I purchased: "The Human Use of Human Beings" by Norbert Wiener. This book is a classic from the 1950s, and Wiener is one of the founding figures of computer science, specifically of cybernetics - the theory of control and communication that addresses the interface between the human and the machine. I have not yet gotten around to reading it, but Wiener was a visionary not only of computer theory, but also of the monumental blunders that digital technologies would inflict on humankind. According to Wiener, digital technologies are pretty similar to socio-economic and legal systems: both encode a protocol of communication with humans for making them do what is expected of them. The sad thing is that we got the design of both socio-economic systems and digital technologies tremendously wrong, simply because we misunderstood humans to be Platonic agents of ideal action, and not rickety bags of DNA and chemicals.

Getting back to "The Village Effect" by Susan Pinker, the book is a revelation on many fronts. People with strong social circles, as in a small tightly-knit village, lead long and healthy lives. People without strong social bonds suffer not only from loneliness but also from an early death, due to a severe lack of immunity to diseases. Face-to-face conversations and pats on the back boost the oxytocin in our bodies, which revitalizes the immune system - something that does not happen with virtual messages. Mobile devices and screens have documented negative effects on the cognitive growth of babies and adolescents. Adolescents using digital media and online social networks are at a greater risk of bullying and social conditioning. The thing that struck me is how unanimous the psychologists are. These are fairly new findings in psychology, but they are not controversial. Like the tobacco industry, the computer industry has simply been ignoring these findings. It considers them "somebody else's problem", nothing to be done about it. After reading the book, I was struck by the mental image of an Indonesian toddler who was recently in the news, addicted to cigarettes and puffing out tobacco smoke like a chimney. This disaster in Indonesia is in no small measure due to the advertising of tobacco companies, which saw a remarkable growth of sales in the rising populations of Asia (even as the dangers of tobacco became apparent in the USA). Future generations of humanity will probably judge us just as badly for letting young and vulnerable people get addicted to the charms of social networks and digital pornography.

However, there is a fundamental problem that prevents us from connecting cause and effect in complex fields like psychology. As mentioned earlier, our bodies are like Rube Goldberg machines, composed of complex systems of chemicals and DNA, each of which is triggered by non-linear switches. The effects of any cause will become apparent only much later, and can only be gauged in a probabilistic sense. Like my stiff back, which resulted from several years of a sedentary lifestyle of lounging in a chair, the effects of modern living on my mind will become apparent only much later. Typically, such effects will be reduced cognitive skills, reduced social empathy, and reduced immunity to diseases. They will probably show up clearly only in my old age, when I will be the most vulnerable.

It is said that scientists working in most fields suffer from physics envy: they are eager to condense the topic of their study to a pithy set of equations. Unfortunately, this approach to doing science is not valid for complex subjects that have many variables and non-linearities. Nevertheless, scientists in fields as diverse as biology and sociology have the urge to reduce their subjects to crude one-dimensional models, sometimes with tragic consequences. A similar envy exists in the medical fields, though I believe it is less well known. I term this Penicillin envy: every pharmacologist and medical practitioner wants to find a wonder drug that can cure the disease of their study. In fact, this is a very modern jealousy, stemming from the remarkable success of the antibiotic drug Penicillin. It just works, as long as the infection is bacterial and the bacteria are not resistant to the drug (resistance was rare when the drug was first introduced). This wonder drug inspired the search for many similar drugs that could take effect on the body just as quickly. However, this search has often been a fool's errand. Our bodies are not pristine laboratories for physics experiments. As I said, they are like Rube Goldberg machines, complex and remarkable when they work, but threatened by collapse and failure at multiple levels. The complex multi-layered metabolism of our bodies has its own way of healing; drugs and medical procedures can only facilitate this natural healing process. Despite great advances in medical science, the workings of our body remain mostly a mystery, and this is especially true for psychological ailments.

In fact, digital technologies are not unique in creating a set of rich-world diseases. People living in traditional hunter-gatherer societies have very few problems with diabetes, coronary heart disease, or indeed, back pain. Our modern sugar-rich diets and sedentary lifestyles are exerting hidden pressures on our internal organs, our immune systems, and our brains. These problems are systemic - stemming from how our society is organized, and not from individual personal choices. For example, the fact that the majority of drinks in the supermarket contain massive loads of sugar predisposes us to choose a sugary drink. The fact that the majority of our friends are on Facebook predisposes us to post on Facebook.

Perhaps the most tragic of the modern diseases is Alzheimer's disease, in which the brain shrinks as it becomes clogged by protein. This is a disease closely connected to diabetes (it is sometimes called type-3 diabetes). With Alzheimer's, the brain suffers a gradual loss of long-term memory, as well as of its capacity to form new memories. Sometimes, people experience hallucinations and delusions. In the intermediate stages, they may gradually lose their vocabulary. As the disease advances, people may suffer a complete loss of speech and become unable to perform even simple tasks independently. This is a slow and painful way to die. I came across two remarkable essays on how Alzheimer's disease affects patients and their caregivers. You may read them at your leisure.

Even after a hundred years of intense study, there is no cure in sight for Alzheimer's. People still dream of a miracle cure, similar to Penicillin. But it is more likely that this disease will be similar to cancer, presenting many fronts in the battle. One interesting piece of information I learnt from Susan Pinker's book is that starvation in mice seems to have a protective effect against Alzheimer's disease as the mice get old. Several scientists speculate that this could be true for humans as well. Pinker also wonders if the elderly people she studied in Sardinia, who live exceptionally long lives, gained a protective benefit against Alzheimer's from the starvation they suffered during the Second World War. Why should starvation protect against a late-onset disease like Alzheimer's ?

This got me thinking about evolutionary reasons: people who suffered starvation during childhood may have an additional impetus to live longer lives, and to keep their memory from deteriorating. I was reminded of Jared Diamond's book "The World Until Yesterday", where he recounts a singular case from Rennell Island. In a different online essay, he relates the same experience.
In 1976, I visited Rennell Island, one of the Solomon Islands, lying in the southwestern Pacific’s cyclone belt. When I asked about wild fruits and seeds that birds ate, my Rennellese informants named dozens of plant species by Rennell language names, named for each plant species all the bird and bat species that eat its fruit, and said whether the fruit is edible for people. They ranked fruits in three categories: those that people never eat; those that people regularly eat; and those that people eat only in famine times, such as after--and here I kept hearing a Rennell term initially unfamiliar to me-- the hungi kengi.  

Those words proved to be the Rennell name for the most destructive cyclone to have hit the island in living memory--apparently around 1910, based on people’s references to datable events of the European colonial administration. The hungi kengi blew down most of Rennell’s forest, destroyed gardens, and drove people to the brink of starvation. Islanders survived by eating fruits of wild plant species that were normally not eaten. But doing so required detailed knowledge about which plants are poisonous, which are not poisonous, and whether and how the poison can be removed by some technique of food preparation.  

When I began pestering my middle-aged Rennellese informants with questions about fruit edibility, I was brought into a hut. There, once my eyes had become accustomed to the dim light, I saw the inevitable frail old woman. She was the last living person with direct experience of which plants were found safe and nutritious to eat after the hungi kengi, until people’s gardens began producing again. The old woman explained that she had been a child not quite of marriageable age at the time of the hungi kengi. Since my visit to Rennell was in 1976, and since the cyclone had struck 66 years before, the woman was probably in her early eighties. Her survival after the 1910 cyclone had depended on information remembered by aged survivors of the last big cyclone before the hungi kengi. Now her people’s ability to survive another cyclone would depend on her own memories, which were fortunately very detailed.  

It seems likely that starvation in childhood may activate an epigenetic pathway that prompts the body to live a longer life and to stay mentally agile in old age, in order to guide younger kin to stave off hunger and danger when the need arises. This is justified from an evolutionary point of view; whether such a pathway actually exists in the physiology of the human body is an open question. If it does, it is very likely that we can fool our bodies into activating this pathway by fasting at the right periods of our lives.

There are two problems with testing such a hypothesis. Firstly, providing a systematic review of the causes and outcomes will take a very long time, especially if human trials need to be performed, as opposed to using other mammals as proxies. Secondly, there is no money to be made from such a discovery. No pharmaceutical company will become rich by suggesting that people fast. This is the tragedy of our economy. Our society is structured in such a way that important medical advances are being stunted.

I posed this question on fasting and Alzheimer's disease only as an illustrative example. The medical field is replete with such questions. More specifically, serious questions in psychology cannot be answered easily by simple experiments. Due to the very nature of the human body and its multi-layered physiology, we need to conduct experiments on a very large scale and over very long timelines. We simply do not have the scientific apparatus today to do those experiments.
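To get a rough sense of the scale such experiments would demand, here is a back-of-the-envelope sketch in Python (my own toy calculation, not taken from any of the books cited). It uses the standard two-sample power approximation; the effect sizes are hypothetical placeholders for how subtly a childhood intervention like fasting might show up in late-life cognition.

```python
# Approximate sample size per arm for a two-sided, two-sample comparison,
# using the standard normal-approximation power formula.
from scipy.stats import norm

def n_per_group(effect_size, alpha=0.05, power=0.8):
    """Participants needed per group to detect a standardized effect."""
    z_alpha = norm.ppf(1 - alpha / 2)   # critical value for the significance level
    z_power = norm.ppf(power)           # quantile corresponding to the desired power
    return 2 * (z_alpha + z_power) ** 2 / effect_size ** 2

for d in (0.5, 0.2, 0.1):               # medium, small, and very small effects
    print(f"effect size {d}: about {n_per_group(d):,.0f} participants per group")
```

For a very small effect, the arithmetic points to thousands of participants per group, followed over decades before the late-onset outcome can even be measured - which is precisely the apparatus we do not have.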

Things get even more depressing when you consider economics. If medicine is concerned with the physiology of a single human body, economics ought to be concerned with the health of whole societies (and whole ecosystems). These are, by definition, even more complex systems. But the trend in economics is to argue for simple theories. Very often, these theories are not even tested. Economists suffer very much from tribal affiliations, with opposing camps refusing to engage in a common dialogue and in a common framework of scientific enquiry. In any case, posing open questions that require long-term investigation is not encouraged in this era of high-frequency trading. So what we have in economics today is a pseudo-science, disguised behind a plethora of numbers.

In 1949, the Nobel prize in medicine was awarded for a surgical procedure known as "lobotomy", in which the neural connections are cut between the pre-frontal cortex (dealing with rational thinking and cognitive complexity) and the central brain areas dealing with emotions. This was professed as a cure for mental ailments such as schizophrenia, but was later understood to be a tragic disaster. Before this understanding dawned, many patients were lobotomized and left in a vegetative state. The Nobel committee realized this fiasco fairly soon, but I wonder how long it will take them to realize similar blunders with the Nobel prize in economics, where entire societies have been lobotomized.

The one hope that I have for the future is that it will become possible to model large complex systems directly from data, using advanced machine learning. These data-driven models will ultimately replace the simplistic models in medicine, psychology, human-machine interaction and economics with more accurate, predictive ones. This will take time, but ultimately we will realize the follies of the current age.
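As a purely illustrative sketch of what I mean (my own toy example in Python, not a model from medicine or economics), consider synthetic data in which the outcome depends on a threshold-like switch between variables, the kind of non-linearity described above. A simple linear model misses the structure, while a flexible model learned directly from the data recovers much of it.

```python
# Toy comparison: a simplistic linear model vs. a data-driven model
# on synthetic data with a non-linear "switch" interaction.
# Requires numpy and scikit-learn.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n = 5000
X = rng.uniform(0, 1, size=(n, 3))      # three hypothetical risk factors
# the outcome only responds to factor 1 when factor 0 crosses a threshold
y = 2.0 * (X[:, 0] > 0.7) * X[:, 1] + 0.3 * np.sin(6 * X[:, 2]) + rng.normal(0, 0.1, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

linear = LinearRegression().fit(X_tr, y_tr)
forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)

print("linear model R^2:", round(r2_score(y_te, linear.predict(X_te)), 2))
print("data-driven model R^2:", round(r2_score(y_te, forest.predict(X_te)), 2))
```

The point is not that any particular algorithm is the answer, but that models flexible enough to learn thresholds and interactions directly from data can capture behavior that a one-dimensional equation never will.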

Sunday, October 11, 2015

A natural park for humans

Can humans live in ecological balance with robots ? Artwork by Robert Chew.

A brief history of ecological time: 

Prof. Stephen Hawking recently did a Reddit AMA, where he answered two distinct types of questions: about an AI-pocalypse, where an AI smarter than humans would spell the end for us, and about technological unemployment, where humans lose jobs to robots and AI that are not necessarily smarter than us in all respects. I have previously written that the first type of question, based on AI-pocalyptic thinking, is distracting. So I will ignore such issues and focus on what Prof. Hawking said about technological unemployment.

Have you thought about the possibility of technological unemployment (https://en.wikipedia.org/wiki/Technological_unemployment), where we develop automated processes that ultimately cause large unemployment by performing jobs faster and/or cheaper than people can perform them? Some compare this thought to the thoughts of the Luddites, whose revolt was caused in part by perceived technological unemployment over 100 years ago. In particular, do you foresee a world where people work less because so much work is automated? Do you think people will always either find work or manufacture more work to be done?

The outcome will depend on how things are distributed. Everyone can enjoy a life of luxurious leisure if the machine-produced wealth is shared, or most people can end up miserably poor if the machine-owners successfully lobby against wealth redistribution. So far, the trend seems to be toward the second option, with technology driving ever-increasing inequality.
Prof. Hawking's candid response ruffled a lot of feathers. Marc Andreessen, the master venture capitalist of Silicon Valley, who co-created the Netscape Navigator web browser and thus arguably created the world wide web as we know it today, tweeted as follows.

In fact, this is a mischaracterization. Prof. Hawking was clearly arguing against greed and inequality in capitalism, and not against the automation of work. He is not alone in questioning the current economic paradigm of liberal capitalism, driven mostly by the free movement of global financial capital. Many economists, such as Thomas Piketty in Paris, have documented the rising economic inequality of the current era. Many other economists, despite supporting the free market paradigm, are also questioning the instability and chaos wrought on the market by modern financial innovations such as derivative trading, high-frequency algorithmic trading and global financial deregulation. I have previously argued that the automation of finance is directly connected to the aggregation of data into monopolistic systems, which I termed Octopuses. The architecture of the world wide web, as developed by Tim Berners-Lee and popularized by Marc Andreessen, is responsible at a fundamental level for the growing monopolization of the global economy. Thus, it is the specific architecture of the computer systems that we chose to develop, and not computation and automation in general, that is responsible for technological unemployment.

In this blog, I will argue that inequality and monopolization, though awful in their own terms, are actually symptoms of a deeper malaise, which is the destruction of biodiversity and ecological balance. This is a story much older than the advent of computers, or indeed, the advent of money. It is a story a hundred thousand years old, starting with the spread of humans out of Africa. Ever since that defining moment lost in the depths of time, we humans have systematically destroyed biodiversity on this planet, driving millions of plant and animal species extinct. The scary news is that this rate of species extinction is actually accelerating. In this long dance of death, only one species has been left unaffected - us. But for how long ?

The history of human civilization can be viewed as a long march towards peace, prosperity and health, as forcefully argued by Prof. Steven Pinker and visualized in nice graphs by Max Roser of Oxford University. Human violence has fallen dramatically over the past few centuries. People today in most parts of the world enjoy long, peaceful lives in good health. This picture will only get better as abject poverty continues to drop and medical science continues to progress. However, there are other graphs that illustrate the Anthropocene, the current geological era marked by species extinction and the destruction of ecological habitats. So where does the destiny of humans lie: in the first set of graphs or in the second ? Can humans be separated from the ecological interdependency that characterizes every other species on this planet ?

We humans are super-predators that kill fully grown adult animals, whereas other predators kill mostly juveniles that are not yet ready to breed. Thus, we have successfully wiped out entire species by predation. We are also a highly adaptable species that survives in extremely different geographic climes - from the poles to the tropics - where we encroach on the ecological niches of other animals and drive them to starvation. Due to our enormous brains and highly cohesive social structure, we humans are efficient virtualizers, able to simulate the behavior of other animals and outcompete them for ecological niches. When one species goes extinct, it brings down a whole network of species that co-evolved with it. Certain animal species, like rats and crows, as well as domestic animals and pets, have co-evolved with us and out-compete many other species to extinction. This aspect of the virtualization of behavior gives a good hint of how robots and AI will relate to us humans. As Alan Turing proved with the universality of computation, computers can be used to simulate any type of behavior in a pristine mathematical sense. In this regard, robots will be even better than us at the virtualization of behavior. So will we maintain an ecological balance with them ? Or will we be out-competed across all of our ecological niches ? This is definitely a question for ecological biology and not just for philosophy.

Oikouménē: A cartography of human habitat

Human societies have typically considered themselves to be part of nature, although the modern capitalist society is an aberration from this norm. Every cultural tradition that arose in hunter-gatherer or agricultural societies has been conscious of the ecological limits of nature and developed norms to respect them. However, these lessons were only learnt in retrospect, after observing massive destruction of ecological habitats. Only much later were they encoded in the ethics, religion and economic behavior of humans. In all of our past history, we were able to find new habitats where the learnt lessons could be applied - by slashing and burning more forest, by spreading across the oceans, by creating technology that enabled us to survive in harsher climes. However, the rate of our technological progress has shrunk the planet. We do not have a second planet on which to apply the lessons that we learn today. There will be no going back to the drawing board. Earth, the pale blue dot in the vastness of the cosmos, that supported life and shaped our evolution as humans, remains our only home. So there is an urgency to come to grips with the awesomeness of our destructive power, even if all we care for is the selfish survival of our own species.

The ancient Greeks were probably the first to seriously study the geographical habitat in which humans can survive. They termed it the Oikouménē (Oikos is Greek for home). Anaximander, Aristotle and ultimately Ptolemy derived detailed maps of the parts of the earth that can be inhabited by humans. This gave birth to the disciplines of not only geography and cartography (map making), but also economy and ecology (both of which derive from the root word Oikos). In fact, it is impossible to study geography in isolation from the qualitative aspects of economics and ecology. We can also argue that ecology and economics cannot be studied separately from each other, or from the limits imposed by geography. So how is it that today's field of economics is so completely devoid of geographical and ecological consciousness ? How is it that it became an esoteric, quasi-religious discipline that establishes truths of its own - on market stability, growth, production and so on - independent of ecological limits ? Especially in the fields of macro-economics and finance, where decisions affect society at a global level, this selective ignorance of ecological limits will be disastrous. There is a tendency among free market ideologues - the school of economics that is the most influential today - to disparage ecological limits altogether as inconsequential, and to subscribe to a belief in the miracles of technology to produce more with less. The Green Revolution, which expanded agricultural yields, is often cited as proof. But agricultural yields increased only because of concerted scientific investment and technological development after noticing and accepting a severe threat; not by ignoring ecological limits altogether and not by practicing laissez-faire capitalism.


Ptolemy's map of the Oikoumené, reconstructed from the description in his book "Geography". 

Today, we face the serious threat of climate change due to the incessant burning of fossil fuels. Despite the clear evidence that we need a moratorium on fossil fuels (vast reserves of coal, oil and natural gas should be left forever buried under the ground), we keep funding the exploration of new fossil fuels. Our entire economy is deeply enmeshed with the financial interests of oil and natural gas. The network of financial capital, enabled by a loose regulatory framework at the global level, obscures these interests through several layers of virtualization of capital. In fact, it is impossible to fix any single ecological issue without fixing the rot in financial capital, and the algorithms and computers that enable it. It is important to note that climate change is not the only ecological threat facing us as humans. There are several planetary boundaries that have been identified, with varying degrees of precision, by environmental scientists. However, unlike the ancient Greeks, we ignore them completely in our study of economics and the Oikouménē.

Planetary boundaries for the survival of humans; more information is available in the book "The God Species" by Mark Lynas.  

The economic situation today is so absurd that it should put to rest any idea that humans are in conscious control of our planet, either politically or economically. Great wars are being waged today between nuclear-armed countries to secure the mines and distribution pipelines of fossil fuels. Vast populations are being bombarded and geographically displaced. The developed economies of the West are completely clueless about where to invest financially so as to fund health insurance and secure the retirement benefits of aging citizens. Periodic booms and busts of real estate and related non-productive sectors are happening with increasing frequency. On the other hand, vast sections of developing economies are suffering from high unemployment and poor education, especially among the youth. Everybody is terrorized by the whims of capital flight, which can happen at the drop of a hat. It is important to note that there is no conscious human agency, either benevolent or malevolent, at the crux of these economic disasters. Rather, these faults are systemic - similar to the fractures and creases in a dam that will inevitably result in flooding. But unlike physical fractures, economic fractures are not easy to fix. We are up against a tangle of deeply enmeshed systemic interests, operated today by computer algorithms. Many of these algorithms are closed source, either because they are part of the secretive operating procedures of companies and financial institutions, or because they belong to governments but are hidden underneath legal jargon and diplomatic channels. Increasingly, the algorithms that make up a government are being rendered invisible using secret courts (such as the FISA court of the USA) or trans-continental business partnerships that are not accountable to any country's law.

There is an important United Nations conference on climate change in Paris next month, aiming to achieve a legally binding resolution on all countries to limit the burning of fossil fuels. But hopes are low everywhere. At the global level, the architecture of our economic and political systems is so rotten that any agreement achieved there will be immediately violated, in spirit if not in letter. So if we are not able to fix the problems at a global level, what can we do to ensure an Oikouménē for the future generations of humanity ?

A natural park for humans 

Yellowstone national park: the first national park in the world 

The answer to this question might be what the American novelist Wallace Stegner called the best idea the USA ever had.
"National parks are the best idea we ever had. Absolutely American, absolutely democratic, they reflect us at our best rather than our worst."
In 1871, an American geologist named Ferdinand Hayden surveyed the Yellowstone region and campaigned feverishly for its protection; the following year it was granted the status of a National Park. He argued that if the federal government failed to protect the territory, nothing would remain of this beautiful landscape.
"The vandals who are now waiting to enter into this wonder-land, will in a single season despoil, beyond recovery, these remarkable curiosities, which have requited all the cunning skill of nature thousands of years to prepare."
The Yellowstone national park gave birth to a widespread movement all over the world, in which large tracts of land were set aside by governments so that a sensitive natural ecological balance could be maintained, spared from human interference. It is crucial to note that the value assigned to these national parks is not human monetary benefit, but a spiritual value in letting nature be for nature's sake. It is a value that cannot be measured in human currency of whatever make.

From the very beginning, the involvement of humans in a national park has been up for debate and riddled with racist and classist overtones. Traditional peoples, such as hunter-gatherer tribes, have long been living in natural landscapes while maintaining ecological balance. With whatever tools and technologies were at their disposal, they learnt to curtail their tendencies towards greed and the over-exploitation of nature, learning the lessons the hard way over generations. When their lands and livelihoods became federal property, they were displaced and succumbed to living in despair on tribal reservations. In many cases, their languages and cultural lore became extinct, as these were severed from the essence of living in their natural environment. This specter of cultural extinction is a constant throughout human history, and it is accelerating today, just like the parallel rate of biological species extinction.

If we widen the definition of cultural extinction to include the extinction of professions and the regular daily routines of people, the story gets much worse. Over the course of civilization, humans have specialized into varied professions based on the specifics of local geography and the availability of economic resources. Often these professions existed for centuries and over generations, thereby crystallizing into a cultural lore on how to lead a happy life in society. Even today, people who lead long and happy lives invariably live in socially cohesive village societies, whose support in old age is as essential as the medicines invented by science. However, in the last couple of centuries, this idyll of village life has been eroded, as many professions disappeared and people were forced to migrate in search of transient opportunities. With automation and robotics, these professions have become even more transient, requiring a nomadic existence in which the daily routine never gets crystallized into a precise set of norms. This constant disruption of routine (and the threat of unemployment) produces anxiety for which our species has not been prepared by evolution, even with our large brains and adaptable societies.

So why is it that we refuse to cast humans as deserving of ecological protection, just like the trees and animals in a natural park ? We can answer this question in two distinct ways. I believe one answer will lead us to racism and narrow-mindedness, whereas the other will lead us to compassion and happiness.

The first answer is centered on "us": trying to protect our own specific professions and lifestyles. We can do this by framing laws and closing borders, even as we ignore the plight of other humans and other animals. This is the impetus of conservative politics, which I do not support, even as I understand the anxiety and tensions behind it. Essentially, conservative politics is blind to a paradox: if we claim protection on ecological grounds, how can we ignore the plight of other animals and humans ? Are they not as essential as us for maintaining the ecological balance ?

The second answer is centered on "them": trying to protect the specific professions and lifestyles of other humans. This needs to be done without any monetary value as justification. When we create a national park, we do not try to measure the value of nature in a human currency; we associate a greater "spiritual" or "ecological" value with the mere existence of nature in balance. Similarly, when we try to protect the professions and lifestyles of other humans, we need to protect them for their own sake. Can we humans be grand enough to overcome selfish greed and protect other humans and animals ? Can we be vigilant enough to protect the ecological balance of other humans and animals, even if we get nothing in return ? Indeed, there is one direct benefit to this type of thinking: compassion leads to happiness.

We can start with cases that urgently need help: traditional peoples living subsistence lifestyles throughout the world are under threat. Their lands are being encroached upon by mining corporations, loggers and livestock farmers. They suffer high rates of sexual and criminal violence. Can we sacrifice something to protect their traditional lifestyles, even if we do not share their ethics or morals ? Can we preserve their languages ? We can then widen this net of compassion to marginal groups of people living within our own society: sexual minorities, drug addicts, undocumented immigrants. Can we sacrifice something to preserve the lifestyle of others, even if we do not agree with that lifestyle and do not believe in a common destiny with them ?


Raising an AI with good parenting

These questions are not mere philosophical nuisances. They are practical steps to preserve our own Oikouménē, as it comes under threat from robots and AI. The BBC recently interviewed the economists Erik Brynjolfsson and Andrew McAfee, who gave advice to readers on which jobs will be hard for AI to replace (and are thus relatively stable). This line of thinking is too deterministic and too focussed on the current moment. In fact, trying not to be replaced by a robot is a race that gives no pleasure to any human, as the robots will eventually catch up. If we are bound to run this race, we are definitely screwed.

Most people think of robots as alien beings. Even people who believe in the eventuality of strong AI - i.e., an emergent artificial consciousness similar to human consciousness - still imagine this AI to be devoid of any ecological connections to the environment. Essentially, this is indicative of how desensitized they themselves are, as humans, from the environment. I found it illuminating to think about this aspect in popular movies and books. How many animals do we see in the Matrix trilogy of movies ? It is as if the entire story happens in an ecological vacuum consisting of humans and robots, without any other biological life. In general, how much wildlife do we see in any movie or book about AI ? Isn't it absurd to think that any AI that arises on this planet will be disconnected from the ecological niches of this planet ? If we as humans create good ecological niches that require compassion, any AI that comes to occupy these niches will be compassionate to us. On the other hand, if we create ecological niches that blindly optimize for selfish greed, we cannot guarantee that the interests of AI will be aligned with those of us humans.

Dylan Evans, a British cognitive scientist, said in a recent radio interview that two things seem to happen to people working on intelligent systems. They either become ecstatic believers in a rapture-like moment when AI will solve all our problems, or they become paranoid about the terrible enslavement of humanity by a super-complex system. Bill Joy, the co-founder of Sun Microsystems (the company behind the Java programming language), was the first one I knew of who flipped. Dylan Evans himself flipped to apocalyptic thinking a few years ago. The problem with both ecstatic and apocalyptic mental states is that they make it hard to think rationally and reasonably. I believe that having an ecological mindset will help computer scientists cultivate more nuanced opinions about AI.

I will end this blog on a personal note. I recently attended a brilliant talk by Prof. Tom Mitchell of Carnegie Mellon University (CMU) on how to raise a machine. This happened at an event celebrating 25 years of the Max Planck Institute for Informatics, where I am an alumnus. But I am also an alumnus of CMU, where I was a visiting student twelve years ago. When I was at CMU, Prof. Mitchell's course on machine learning was over-registered and the lecture halls were always filled to capacity. So I was not able to take his course, although I sneaked into some of his lectures. At that time, he had just finished an introductory textbook on machine learning. The field has advanced greatly in the intervening years, so it was a great opportunity for me to hear what Prof. Mitchell thought of the progress. He spoke of the paradigm of function approximation and optimization that has achieved the greatest success in machine learning in recent years. But he admitted that this was also something of a disappointment. He recalled a conversation with his mother, in which he was trying to explain what machine learning means, when she said

I know about learning. I raised you. 
This is a great analogy. Human children learn a lot of things (a lot of functions to approximate), and they also learn how to learn. Raising a machine needs to be done in a similar fashion. With this as inspiration, Prof. Mitchell created NELL (the Never-Ending Language Learner) - a machine that is never shut down and that has been trying to build a conceptual model (ontology) of the world by learning from the web over several years. This is the closest thing I am aware of to an AI capable of reaching human-level consciousness, even though it stands extremely far from that goal. As we can see, such a machine needs to be "raised", with examples and good parenting. If we humans set awful examples - by exterminating our ancestors, by destroying eco-systems, by eliminating rivals without compassion - we should not complain if our children (the AI) do the same to us.
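To make the phrase "function approximation and optimization" concrete, here is a minimal sketch in Python (my own toy example; it has nothing to do with NELL's actual code). It fits a single unknown parameter by gradient descent on a squared-error objective - learning, in the narrowest sense that Prof. Mitchell contrasts with how a child is raised.

```python
# Fit y ~ w * x by gradient descent on the mean squared error:
# the simplest possible instance of function approximation + optimization.
import numpy as np

rng = np.random.default_rng(42)
x = rng.uniform(-1, 1, size=200)
y = 3.0 * x + rng.normal(0, 0.1, size=200)   # the "unknown" target function is y = 3x

w = 0.0                                      # initial guess for the parameter
learning_rate = 0.1
for step in range(200):
    grad = np.mean(2 * (w * x - y) * x)      # gradient of the mean squared error w.r.t. w
    w -= learning_rate * grad                # one optimization step

print("learned parameter w:", round(w, 3))   # converges towards 3.0
```

Everything a system like NELL does sits far above this level, but the contrast is the point: approximating one function is a solved problem; raising a learner that keeps acquiring new concepts, and learns how to learn, is not.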