Wednesday, November 25, 2015

The four sided negation: A tool for overcoming bias in science, atheism and democracy

Towards the end of the 12th century, the Islamist raider Bakhtiyar Khilji ransacked the ancient Buddhist university of Nalanda in eastern India. Apart from the many students and monks who lived there, the monastery also housed a large library, similar in extent to the great library of Alexandria in Egypt. After he looted the wealth of the monastery and put many monks to death, Bakhtiyar Khilji was asked to decide what to do with the books in the library.

Khilji first asked if there was a copy of the Quran in the library. The answer was negative. He then ordered all the books in the library to be burned. "But valuable knowledge in the books will be lost forever", warned the caretakers of the library. Khilji famously answered as follows:

"If there is a book that says anything against the teachings of the Quran, then it is blasphemous and should be burned. If there is a book that agrees with the teachings of the Quran, then it is superfluous and should also be burned. Please take care that no book is left unburned."

The Persian historian Minhaj-i-Siraj, who accompanied the raid, recounted that the fires from the embers of the library kept blazing for several days. This action might horrify us today on account of its senselessness and violence, but there is a twisted logic to what Khilji said in defense of burning the books. An echo of this logic can be found in the pronouncement of George W. Bush after the 9/11 attacks on New York City in 2001.

You are either with us or you are against us! 

Bush did not see any shades of gray in the vast canvas of the political landscape of the world. Similarly, Khilji classified knowledge in a strict binary fashion: that which is in agreement with the Quran and that which is not. He considered the Quran itself all-encompassing in knowledge, so this strict binary classification precipitated his decision. But what was lost in the many books burnt at Nalanda was not only a vast compendium of facts and knowledge, but also a philosophical commentary on the limitations of the binary classification of knowledge. Indeed, the most profound examination of these limitations was made by Nagarjuna, a Buddhist scholar of the third century AD who, many centuries before the raid, served as a rector of the Nalanda university. But Nagarjuna elaborated on an even more ancient tradition from India, tracing its roots at least three thousand years deeper into history, when it was codified in the Vedas - the holy scriptures of the Hindu religion. According to this tradition, logic is not 2-valued (true/false, as commonly understood), not even 3-valued (admitting shades of gray), but 4-valued. This 4-valued logic is known in Sanskrit as Catuskoti. For a long period of human history, this logic reigned supreme in Asia. Eminent philosophers and logicians in China, Japan and southeast Asia painstakingly translated the Buddhist scriptures into their native languages, the most cherished among them being the commentaries written by Nagarjuna.

I think it is sad that this ancient logic is lost in practice today. Due to its association with religious and spiritual discourse, this 4-valued logic is sometimes considered unscientific, but nothing could be further from the truth. In fact, the most important applications of this logic have been in objective fields. In this blog, I will argue that considering 4 sides to a negation, instead of the usual 2, is a necessary requirement for both scientific study and political discourse. Essentially, negation is the most elementary operation in language. We cannot think without applying negation to different concepts and ideas. But every time we apply negation, we introduce a strong bias into our thinking, which will eventually lead us to severe errors and paradoxes. We will examine this further, while taking some interesting detours in history.

Introduction to Catuskoti or the 4-sided negation: 

Figure 1: Venn diagram illustrating binary logic 

Let's consider a simple proposition, "The Cheetah is the fastest animal", and denote it by the symbol A. In regular logic, this proposition is either true or false. When we think of a proposition, we usually visualize it in the mind's eye on a straight line-segment, akin to a barometer of falsehood: on one end lies truth and on the other end lies falsehood. When we are certain of the truth value of a proposition, this line-segment dissolves into two tiny dots standing apart from each other, which we can call "A" and "Not A". However, this visualization is incorrect.

Instead, what we need to visualize is the Venn diagram shown in Fig 1. When we frame a proposition, we create an island "A" in a vast sea of possibilities, known as the universe Ω. When we use strict binary logic, we associate "Not A" with everything in Ω except the island denoted by A. Depending on how many words we use to describe A and how clearly we describe it, our knowledge of this island increases. But the vast sea surrounding this island can still be a mystery to us. To elaborate on our example, a Cheetah can be described as a wild cat with a yellow fur coat and spots, native to Africa, that hunts gazelles. To go to the extreme, we may describe it by giving the entire genome of the Cheetah in the code of DNA. But what exactly is "Not A"? What differences to the genome of a Cheetah would classify it as a non-Cheetah?

Figure 2: Venn diagram illustrating fuzzy or probabilistic logic 

As our knowledge about other animals increases, we can measure our understanding by creating an island of ambiguity, "A and Not A", that encroaches into the island of "A" in the Venn diagram. This island measures the ambiguity of the proposition, inherent to the finite length of the description of A. As we conduct experiments into how fast Cheetahs run as opposed to other animals, we can measure the probability of A being true. Ideally, this island of ambiguity shrinks and we get back to our binary world.

But in reality, we still do not have complete knowledge of the universe Ω. What exactly is an animal? How many other animals do we know? What exactly do we mean by "fast"? Can bacteria floating in rapid ocean currents be considered animals? This is a case we did not consider when we framed the proposition A. Unlike the island of ambiguity, this uncertainty is related not to A but to the universe Ω. Specifically, it refers to the limitations of our understanding when we say "Not A": its overlap with Ω is not as complete as we imagine it to be. If we correct for this, we obtain a visualization of the 4-valued negation as follows.

Figure 3: Venn diagram illustrating Catuskoti, or the 4-sided negation.

This 4-sided negation creates four distinct zones in the universe Ω. With increasing knowledge, our understanding of all these 4 zones increases, but the zone of "Neither A nor Not A" may remain forever. This is because we may predict that we don't know something about the universe, but we may not be able to express exactly what it is that we don't know. Please note that this limitation is due to two things: (1) our understanding of the world and (2) the limited expressibility of language. Even if we allow humans to be omniscient (in whatever finite context for Ω is specified by our problem), the ambiguity in the language may never be resolved. This is a mathematical fact as well as a philosophical realization, one that was achieved only very recently in Western philosophy: we can refer to the works of Bertrand Russell and Ludwig Wittgenstein, as well as the mathematician Kurt Gödel. In contrast to the west, these limitations were studied from the very beginning in Indian philosophy and logic.
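As a concrete (if simplistic) sketch, the four zones can be represented in a program. The snippet below loosely follows Belnap-style four-valued logic; the names and the behavior of negation are my own illustrative choices, not a canonical encoding of the Catuskoti.

```python
from enum import Enum

class Koti(Enum):
    """The four zones of the Catuskoti for a proposition A."""
    TRUE = "A"                          # A holds
    FALSE = "Not A"                     # A does not hold
    BOTH = "A and Not A"                # the island of ambiguity
    NEITHER = "Neither A nor Not A"     # our unexpressed ignorance of the universe

def negate(v: Koti) -> Koti:
    """Negation swaps the two classical zones but leaves the two
    non-classical zones fixed: ambiguity about A is also ambiguity
    about Not A, and likewise for ignorance."""
    table = {
        Koti.TRUE: Koti.FALSE,
        Koti.FALSE: Koti.TRUE,
        Koti.BOTH: Koti.BOTH,
        Koti.NEITHER: Koti.NEITHER,
    }
    return table[v]

# "Is the Cheetah the fastest animal?" -- if our universe of discourse
# never considered bacteria in ocean currents, the proposition sits in
# NEITHER, and no amount of negation moves it out of that zone.
a = Koti.NEITHER
print(negate(a))  # Koti.NEITHER
```

The point of the sketch is the last two lines of the table: applying negation, however many times, never dispels the two non-classical zones.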

In the traditional Indian philosophical system of Vaiseshika, the universe is supposed to consist of many tiny atoms. In the Venn diagram above, this translates into Ω being composed of many (possibly infinitely many) tiny discrete samples. When we use negation, it is as if we cut the above Venn diagram with a knife. Our knife may not be sharp enough to cut precisely between the different atoms. The 4-sided negation is thus a fundamental tool in logic that acknowledges this limitation.

Unlike the island of ambiguity given by "A and Not A", which is addressed by fuzzy or probabilistic logic, the 4 zones of Catuskoti cannot be reduced to a single line segment between 0 and 1. Indeed, such a reduction is topologically impossible. What we need is a different mathematical construction, and I am not aware of anything that fits the bill.

I find it useful to employ an additional tool along with the traditional line-segment between 0 and 1 with which we measure probabilities. I suggest using a circle, onto which the entire universe Ω is projected. This circle can be visualized as roughly centered at the island "A". At each point on the circle, we can measure our uncertainty in predicting the world in that direction. Unlike probabilities, this uncertainty (ignorance) can be unlimited, but it can still be characterized in some way. For example, we can say that our ignorance is greater in certain directions than in others. The metaphor of a circle is also used in defining what a "well-rounded" education means. Until a few decades ago, the education of a person was not considered complete until he achieved certain basic knowledge in various diverse fields. Without this fundamental knowledge in all directions, it was considered that any decisions or conclusions made by a person would be faulty. With the increase and diversification of scientific fields, however, our society has sadly reduced the emphasis on well-rounded education. Regrettable as this is in itself, I think it is also a symptom of a deeper problem in logic, where the 4-sided negation was forgotten. A toy sketch of this circle of ignorance follows below.
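This is a minimal sketch only: every direction name and ignorance score below is invented purely for illustration around our Cheetah proposition, and the circle is reduced to a handful of sampled directions.

```python
# Hypothetical directions of the universe Omega around the island "A"
# ("Is the Cheetah the fastest animal?"), each with an unbounded
# ignorance score. All names and numbers are invented.
ignorance = {
    "other land animals": 0.2,     # well studied, little ignorance
    "birds and insects": 1.5,
    "marine life": 4.0,
    "microorganisms": 20.0,        # bacteria in ocean currents?
}

def sketch_circle(scores):
    """Place each direction at an angle on the circle of ignorance
    and report where our ignorance is greatest."""
    n = len(scores)
    for i, (direction, s) in enumerate(scores.items()):
        angle = 360.0 * i / n   # spread the directions evenly
        print(f"{angle:5.1f} deg  {direction:<20} ignorance = {s}")
    worst = max(scores, key=scores.get)
    print(f"Our ignorance is greatest toward: {worst}")

sketch_circle(ignorance)
```

So where and how was this innovation in logic forgotten?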

The story of Ramon Llull and how European renaissance missed the 4-sided negation: 

Ramon Llull was a Catalan philosopher of the 13th century who was critically important for the historical development of the European renaissance. He traveled extensively and spent a significant amount of time in North Africa, conversing with Arabs in the Moorish centres of learning. During his time (roughly termed the middle ages), Europe had sunk into the depths of ignorance, with the classical discoveries of Greece and Rome forgotten. In contrast, the Arab civilization was in full flourish, its mathematicians and astronomers having compiled and built upon knowledge from lands ranging from Greece to India and China. It was the contact with Arabs, in southern Spain and Italy, that sparked the fire of European scholarship. Ramon Llull was a critical piece of this puzzle.

Arab astronomers were interested not only in mapping the course of the stars and the planets, but also in understanding how this connects to human psychology. These esoteric sciences of astrology, alchemy and parapsychology are shunned today as superstitions, but in earlier periods, all scientific investigation was done at the behest of such esoteric subjects. The fundamental goal of these scientists was to liberate the human condition from suffering and achieve harmony with the universe; the objective study of natural phenomena was only a means to this end. The harbinger of modern science, Isaac Newton, was himself a practitioner of alchemy. With Ramon Llull, these lines were blurred even further. The Arab astrologers were building upon the ancient science of Indian astronomy and astrology (known as jyothisha). In ancient Indian mythology, the circle denoted the universe and the square denoted the human understanding of this universe. The Vedic sacrifice required a mathematical understanding of how to convert a circle into a square. These esoteric ideas spurred a lot of mathematical development - arithmetic, algebra, trigonometric functions, infinite series and so on. Many of these ideas reached Europe via an Arab translation of an Indian text, the Brahmasphutasiddhanta (translated into Arabic as Zij al-Sindhind by the astronomer Muhammad al-Fazari). The Arab astrologers codified these ideas into a mechanical system that makes the right predictions irrespective of who is using it. In one form, this mechanical system became a device known as a Zairja - the first human imagining of an automaton.

Ramon Llull was deeply inspired by the Zairja and created an esoteric art of symbolism that he termed Ars Magna. This was supposed to represent all the aspects of the universe, as well as all the aspects of God. By representing all the possible aspects of knowledge in a cartographic manner, he created the first representation of "well-rounded" knowledge. He wanted to develop a mathematical automaton for debating with the Muslims, in order to win them over and convert them to Christianity. The legend goes that he was pelted with stones in Tunis when he announced that his Ars Magna could not be understood unless a person accepted Christ as the savior. After his death, Llull was beatified by the church, but his most sincere disciple was burnt at the stake. This was Giordano Bruno, who built upon the Ars Magna to denote all the cosmological concepts, and who had come to the conclusion that the sun was at the centre of the universe (and not the earth, as suggested in the Bible). Galileo escaped a similar fate from the church by recanting his claims, although he could see the motions of the planets through his telescope. This uneasy relationship with astrological and esoteric sciences persisted in all the Abrahamic faiths, with practitioners often accused of worshipping the devil. This was inevitable, because these sciences stemmed from pagan cultures like those of Greece and India.

The influence of Llull extended beyond astronomy. Leibniz wrote his dissertation on Llull's art, and this greatly influenced his future pursuits: calculus and logicism in philosophy. He called this Ars Combinatoria. Along with Newton, Leibniz was the fundamental mover of the European renaissance in mathematics and physics. Many mathematicians tried to follow in Leibniz's footsteps, to ultimately realize a consistent and mechanical system that can predict everything. The latest expression of this dream was by Hilbert, who tried to reduce all of mathematics to a set of axioms. However, this adventure was cut short by Gödel, who proved that no system of axioms can be complete (describe everything) without inconsistencies creeping in. This crazy pursuit through history has been aptly described by the computer scientist Gregory Chaitin (who, along with Andrey Kolmogorov, is reckoned a founder of the field of algorithmic information theory). To this day, the field of knowledge representation in computer systems, ranging from databases to the internet, is indebted to Ramon Llull and the Zairja of the Arab astrologers. The amazing thing is that this historical line of development completely missed the Catuskoti, the 4-sided negation in logic. It is a mystery why it did not reach Arab scholarship from India, and from there Europe. It is unlikely that all the knowledge disappeared with the burning of the Nalanda library.

If Llull had been aware of the Indian tradition of logic, he would not have claimed that the people of Tunis needed to convert to Christianity to understand his Ars Magna. He would have known that the Ars Magna would forever be a work in progress, and forever fall short of its goals. This was ultimately expressed in the late 1800s by Emil du Bois-Reymond as "Ignoramus et ignorabimus", and at that time, he was soundly ridiculed for saying it.

The history of western logic had another brief brush and fascination with India. The English mathematician George Boole was deeply inspired by Indian philosophy. His wife Mary Everest Boole, a significant mathematician in her own right, was the bridge to India. Her uncle, George Everest, was a great adventurer in India and gave the eponymous mountain its name. Mary Boole also corresponded with Jagadish Chandra Bose, a noted Indian scientist of the time. She wanted him to study the efforts of her husband, but this letter was left unanswered by Bose. George Boole developed an elaborate system of logical inference using the principles of algebra, which he wanted to apply to esoteric ends (salvation, transcendence etc.). Nobody at that time believed that this system would have any practical use. This changed dramatically when Claude Shannon used Boolean logic as the basis of digital circuit design, paving the way for his mathematical theory of information. It is an open question whether the development of information theory would have taken a different course had Boole incorporated the 4-sided negation into his logic. However, this did not happen, and computer systems today are mostly restricted to 2-sided or 3-sided logic (infinite-valued logics, like fuzzy systems, are actually extensions of 3-sided logic, as illustrated in Figure 2).


Philosophy of Nagarjuna: 

In philosophy, western philosophers starting from Kant have expressed doubts about the limitations of spoken knowledge. Russell and Wittgenstein systematically characterized the various dilemmas and paradoxes that arise from these limitations of expression. In Indian philosophy, an awareness of these limitations was already fundamental, due to the Catuskoti logic. The emphasis in Indian philosophy was not on finding these limitations, but on achieving spiritual transcendence by overcoming them, even if it is not possible to express them in spoken language.

Historically, Indian philosophy was divided into 9 streams or viewpoints (Darshana in Sanskrit). Of these 9 streams, 6 support the existence of a conscious self (Samkhya, Nyaya, Vaiseshika, Yoga, Mimamsa, Vedanta) and 3 oppose the existence of self (Buddhism, Jainism, Charvaka philosophy). But all these systems germinate from the same root and stem, which is the Samkhya tradition, which provides the mythological imagery and symbols for arguing different concepts. I have earlier written an introduction on this blog to the Samkhya system (meaning enumeration), clothing traditional philosophical terms in a modern garb of vocabulary, so that we can better understand them. Mythological images can express notions of paradox better than linguistic symbols and logic, so many Indian religions explored this route, developing a plethora of symbols and stories. The idea is that transcendence can be achieved through devotion, or love, or right action, without the necessity of knowledge expressible in spoken form.

However, the path of knowledge was not altogether absent. Philosophical systems using logical inference and deduction were developed first by the Nyaya and Vaiseshika traditions, which are based on the Catuskoti. Knowing these traditions (or viewpoints) was thus considered essential for greater philosophical study in the other systems. The first significant challenge to these systems was given by Gautama Buddha. The Jain tradition also has an elaborately developed logic, criticizing and thus extending the earlier Nyaya systems. Probably the most extensive development of logical inference for transcendental goals was done by the Buddhist philosopher Nagarjuna in the 3rd century AD. The Australian philosopher Graham Priest, who specializes in dialetheist (or para-consistent) logic, wrote a great introduction to Nagarjuna's philosophy from a western perspective. He believes that the vision of Nagarjuna was greater than even that of Wittgenstein and Russell, as he observed greater paradoxes.

I will try to give a short introduction to Nagarjuna, but before we go there, we should know that he remains a profound influence in many Buddhist countries. His Madhyamika (middle-way) tradition has deeply influenced the development of Buddhist thought in China and Japan. For example, the Japanese koans (or riddles) stem from a tradition that goes back to Nagarjuna. To get a more thorough overview of this philosophy, one should consult these living traditions; what I can give in this blog is a very brief glimpse. In India, the influence of Nagarjuna was profound on Shankara, the philosopher who popularized the Advaita tradition in Hinduism. It is Shankara who is probably the closest link between existing religious practices in India and its ancient philosophic tradition. I believe that without understanding Nagarjuna we cannot decipher that link. I am not capable, in any measure, of undoing these mysteries. But I think I can give a hint.

Recall that we represented the Catuskoti using a Venn diagram with four zones, for studying an objective question such as "Is the Cheetah the fastest animal?". But any such objective study is limited to the very ground on which we observe the different phenomena. In other words, we are restricted to the 2D plane of the Venn diagram, although the real universe may have many more dimensions. The computer scientist Alan Kay once gave a good analogy for this situation. We can imagine that we are in the crevice of a deep gorge, like the Grand Canyon in the USA, which was shaped by several eons of erosion and landscaping. Standing in this gorge, we can observe the many layers of sedimentary rock that reflect the movements of history. But our vision will be restricted by the rocks of the gorge. Only very rarely do we take a fleeting glimpse at the sky and wonder what may lie beyond. Alan Kay calls such a glimpse a "Kerpow", a punch that pushes one beyond the dimensions of the world one is living in. In Buddhist and Advaita philosophy, salvation or transcendence is considered to be obtained by a series of such "kerpows".

Firstly, we do not even know the topology of the 2D plane of reality that we showed in the Venn diagram. Buddhist and Hindu philosophy considers this "Samsara" - a cyclical world where we keep retracing the same steps. I think the Dutch artist Escher gave a brilliant visualization of such a world, with ants walking along a Möbius strip.


Figure 4: Ants walking around a Möbius strip - illustrating the Buddhist Samsara 

Nagarjuna considered the problem of logic in such a world. A rationalist person is like an ant walking around this Möbius strip of logical constructs (the zones of Catuskoti). In order to escape from this restrictive world, Nagarjuna developed a 9-valued logic. The first four zones correspond to one side of the plane, as illustrated in Figure 3. The next four zones correspond to the other side of the plane, an inversion of the previous four. Nagarjuna said that an inquisitive and philosophical mind will not be satisfied with any of these 8 zones and will seek a 9th zone, which is an escape from the Möbius strip into three-dimensional space. Obviously, this is a procedure that has to be repeated several times, as we approach concepts of greater logical ambiguity. In Buddhist mythology, this is akin to climbing the Meru mountain, at the top of which lies salvation or Nirvana.

A very similar rhetorical exercise in negation is undertaken in the Advaita tradition, whose followers perform negation in a series of steps called "Neti Neti" ("not this, not that" in Sanskrit).

As we can see, Buddhist and Hindu philosophy graduated from the 4-sided logic that is suitable for objective analysis to more complex 5-sided or 9-sided logics suitable for spiritual investigation. However, this gave 4-sided logic a bad image, as it became conflated with esoteric and spiritual agendas. I believe a proper application of the 4-sided negation to objective scientific fields will pay fruitful dividends, especially in situations where our ignorance of the field is immense but predictable.

How to overcome bias in atheism and democracy: 

Bias in science is dangerous, but bias in politics has immediate consequences for humankind. Even if people don't have a thirst for knowledge, they need to survive through their daily struggles. Thus, if our political discourse has serious limitations related to the logic we deploy, we will face the trauma of misunderstanding and violence. Unfortunately, the majority of people who have active political opinions today use 2-sided or 3-sided logic in their heads. That means they do not understand the 4th zone (neither yes nor no), which is reflective of their relative ignorance of the subject.

A faulty language in debate is like a bad posture of the human body: used repetitively, it results in pain. A good analogy is the vertigo that results from tiny calcium crystals accumulating in the inner ear, destabilizing the vestibular sense of balance. People who suffer from this vertigo feel that their world is constantly spinning. Needless to say, this is a nightmare condition. A cure for this vertigo was found fairly recently by an American doctor named John Epley. Using a 3D cast of the human ear's anatomy, he devised a series of motions (called the Epley maneuver) that moves these particles from the troublesome spot in the ear to a non-bothersome spot. The relief felt by the patient is dramatic and immediate, almost leading them to suspect black magic. I think heated political debates can benefit from the 4-sided negation in a similar manner. By understanding how our language and words are framed, and how limited they are in expressing certain ideas, we can overcome our ignorance and conceit.

Of all the debates in politics, I think nothing is as absurd as the debate on religious freedom. This is because nobody really wants to understand the other side in a religious debate. The famous (notorious) atheist Richard Dawkins recently put this as "If I want to repudiate Leprechauns, I should not be expected to understand Leprechaun ideology". This laziness is understandable. What is not excusable, however, is the reluctance to see one's own ignorance. By using 4-sided logic instead of the more intuitive but faulty 2-sided logic, we will develop a habit of systematically understanding where our ignorance lies.

In France, there is currently a heated debate about the limits of religion. Owing to its historical revolutionary spirit against the catholic church and monarchy, French political life is very secular. However, this secularism comes from the pick-axes and shovel-fights of the 18th century, where all the debates were fought using the 2-sided negation. The secularist in France is highly confident about how right he is. In this way, he is not much different from a rabid religious fundamentalist. The problem here is not with the ideology, but with the 2-sided negation.

Wise people of many cultures, including religious people, have learnt to be conscious of the limitations of their knowledge. In India, where many cultures and religions have intermingled through history, people have a high tolerance for ambiguity. A person may be Hindu, but pray in a church, or seek blessings from a Muslim saint. A musician may be a deeply religious Muslim, but still offer prayers to Saraswati (the Hindu goddess of learning); a good example was Ustad Bismillah Khan, a gifted shehnai player. There have been thousands of examples of Hindu and Muslim saints who cherished the company and customs of other religions. My favorite example is Kabir - a devout Muslim poet who sang the praises of Rama. The evidence of this cosmopolitan culture in India is present in all religions, including Christianity and Islam. All these religions are integral parts of the rich tapestry of Indian culture today.

There was recently an opinion piece in The New York Times comparing the state of secularism in India and France. I find this essay a nice illustration of the problems with a 2-sided secularist outlook. France has had an awful history of religious discrimination: against Jews, against Protestants, and now against Muslims. By comparison, the history of India is a miracle, with very few incidents of religious violence for such a large population. In my opinion, India achieved this not because of greater scientific knowledge or better legal rights, but because of a cultural assimilation of the 4-sided negation, which allows for greater tolerance in debate. In contrast, due to its reliance on 2-sided, black-and-white logic, France is at a greater danger of discriminating against its own citizens.

However, India is not an idyll of peace. Its traditional tolerance is under great threat today. The urge for purity and austerity, made notorious by the Islamist fundamentalists, is encroaching on every religion. In India, for example, the Hindutva brigade is creating a cartoon version of Hinduism where the 4-sided logic is truncated into a 2-sided one. This desire to bifurcate everything in this world into two distinct boxes of good and bad, or pure and evil, is a self-destroying engine. Happiness and salvation in this world lie in a tolerance for ambiguity and in a consciousness of our limitations. This is as true of atheists as of religious people, of whatever stripe.

The works of Shakespeare are considered a turning point in western literature, where the internal dilemmas and mental confusion of characters are shared directly on stage. A human being is thus presented as a process in the making, and not as a finished product. The most famous example is Hamlet's soliloquy "To be, or not to be". This introspection of our mental processes is essential if we are to become better people. But the story of Hamlet is a tragic one: Hamlet succumbs to the guilt of murder. Had Shakespeare written Hamlet's soliloquy with the 4-sided negation, following the lead of Nagarjuna, he would have asked:

"To be, or not to be"
"Or to be and not to be"
"Or neither to be nor not to be"
"Is there still a question ?"

A dramatic gesture would be to make Hamlet nod his head with the Indian head wobble. I will end this blog on this funny note. I believe that Nagarjuna's philosophy is responsible for this gesture, which stumps many foreigners who visit India, as they cannot understand whether their Indian friend is saying yes or no. For greater clarification, they may ask,

"Do you mean yes ?" 
"Or no ?"
"Or yes and no ?"
"Or neither yes nor no ?"

"Or did you just reach nirvana ?" (This is an extremely rare case) 

Addendum: Representation of Catuskoti using predicate logic


After discussing with Joscha Bach (http://palmstroem.blogspot.com) on Twitter, I realized that the Catuskoti can be implemented in predicate logic using 3 predicates: A, Unknown(A) and Unknown(Not A). When these 3 predicates are treated as binary variables, they give rise to 8 states. More interesting things can be represented when these predicates are treated as continuous variables, able to take any value in [0, 1]. The space spanned by these 3 predicates can be visualized as a cube standing on 3 axes. I propose to call this Gotama's cube (referring to the ancient Indian logician who developed the Nyaya Sutras). The 8 states appear as the corners of this cube, which can be enumerated as follows: 

0) Unknown Not A, Unknown A, Not A
1) Unknown Not A, Unknown A, A
2) Unknown Not A, Known A, Not A
3) Unknown Not A, Known A, A 
4) Known Not A, Unknown A, Not A 
5) Known Not A, Unknown A, A 
6) Known Not A, Known A, Not A 
7) Known Not A, Known A, A 

Of these 8 states, the states (4, 5) are indistinguishable, so for practical purposes they can be considered as one state, marked "Unknown A". Similarly, the states (0, 1, 2, 3) are indistinguishable, so they can be combined into a single state marked "Unknown Not A". With these 4 states, we get back to our traditional representation of the Catuskoti: the clubbed state "Unknown A" is equivalent to "A and Not A", and the clubbed state "Unknown Not A" is equivalent to the state "Neither A nor Not A". 
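As a small sketch of this clubbing rule, the following Python snippet treats the three predicates as booleans, enumerates the corners of Gotama's cube with the state numbering above, and collapses them onto the 4 zones of the Catuskoti:

```python
from itertools import product

# The three predicates of Gotama's cube as booleans:
# Known(Not A) (else Unknown), Known(A) (else Unknown), and A.
# Enumerating them in this bit order reproduces states 0-7 above.

def zone(known_not_a: bool, known_a: bool, a: bool) -> str:
    """Club the indistinguishable corners into the 4 Catuskoti zones."""
    if not known_not_a:
        return "Neither A nor Not A"    # states 0-3, clubbed "Unknown Not A"
    if not known_a:
        return "A and Not A"            # states 4-5, clubbed "Unknown A"
    return "A" if a else "Not A"        # states 7 and 6

for idx, (kna, ka, a) in enumerate(product([False, True], repeat=3)):
    labels = [
        "Known Not A" if kna else "Unknown Not A",
        "Known A" if ka else "Unknown A",
        "A" if a else "Not A",
    ]
    print(f"{idx}) {', '.join(labels):<42} -> {zone(kna, ka, a)}")
```

Running it prints the 8 corners next to the 4 zones they collapse into, which is exactly the clubbing described above.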

Why do we need 8 states in predicate logic, and not 4, to represent the Catuskoti? 

This is because the Catuskoti states clearly that exactly one of the 4 states can be true: A; Not A; A and Not A; Neither A nor Not A. 

Because of this, we cannot map either of the states "A and Not A" or "Neither A nor Not A" to the predicate Unknown(A). The predicate Unknown(A) can be true at the same time as A or Not A: this is the natural combination of predicates in predicate logic, but it is strictly forbidden in the Catuskoti. So the traditional method of handling uncertainty in predicate logic with a single predicate Unknown(A) (which, by the way, is different from fuzzy or probabilistic logic without any additional predicate) is still insufficient to map the space of the Catuskoti. 

However, it is possible to cover all 4 zones of the Catuskoti with the 8 states given by 3 predicates, by allowing for some redundancy. Please note that this is not the most compact representation of the Catuskoti: in theory, you can have just 2 predicates to map these 4 states. However, those predicates are not the usual Unknown(A). 

I am not sure whether many people use 3 predicates, specifically the additional predicate Unknown(Not A) apart from Unknown(A), in computer-based logic and knowledge representation. This is a fairly simple method to adopt, and it will potentially give rise to more stable knowledge representations. If you are a researcher in knowledge representation or machine learning, please go ahead and try it. All you need to do is modify your traditional error bars to carry 2 confidence values instead of one. :) 

You can also visualize the uncertainty on Gotama's cube: using the HSV representation, map the hue to the (A, Not A) axis, map the saturation to Unknown(A), and map the value (intensity) to Unknown(Not A). The interesting thing with this visualization is that it exposes the fact that without saturation, you cannot distinguish hue, and without intensity, you can distinguish neither saturation nor hue. But in lighter shades, the colours become more visible. 
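Here is a minimal sketch of that mapping using Python's standard colorsys module. The hue endpoints (red for Not A, green for A) and the direction of each mapping are my own arbitrary choices; other conventions are equally reasonable.

```python
import colorsys

def gotama_color(a: float, unknown_a: float, unknown_not_a: float):
    """Map a point of Gotama's cube to an RGB colour via HSV:
    hue <- the (A, Not A) axis, saturation <- Unknown(A),
    value (intensity) <- Unknown(Not A). All inputs lie in [0, 1]."""
    hue = a / 3.0  # 0.0 = red for Not A, ~0.33 = green for A (arbitrary endpoints)
    return colorsys.hsv_to_rgb(hue, unknown_a, unknown_not_a)

# With zero value, neither hue nor saturation is distinguishable (black);
# with zero saturation, the hue disappears into grey.
print(gotama_color(1.0, 0.5, 0.0))   # (0.0, 0.0, 0.0): black
print(gotama_color(1.0, 0.0, 0.9))   # (0.9, 0.9, 0.9): grey, hue invisible
print(gotama_color(1.0, 0.8, 0.9))   # a clearly visible green
```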

Do let me know if you do any such experiments. I will be happy to report them in the comments here. 




Friday, November 13, 2015

Of electric fences and free software: A cybernetic theory of freedom


 Cattle imprisoned by an electric fence: Illustrating a cybernetic conception of freedom 

On privacy and freedom: 

There is a remarkable project under development called the Freedombox, which is trying to put together a collection of privacy software for browsing securely on the internet. This is built on top of the Debian stack of free software, and includes the powerful Tor browser and hidden services. I recently saw a great presentation of Freedombox and its features by Sunil Mohan Adapa, who is a good friend of mine from my university days. After the Snowden revelations about industrial-scale spying on the public by the NSA and other government security agencies, there has been a greater demand for software that protects the privacy of users. However, encryption software is typically difficult to use for the general public. This puts it out of reach of most of the public, leaving it to technically sophisticated programmers and to actual criminals careful to cover their tracks.

Why do the general public deserve privacy on the internet, if they are not doing anything wrong? The vast majority of people are citizens respectful of the law, but they have a right to the human dignity of privacy. As human beings in control of their own lives, they deserve the right to hide their wallets, bank statements, personal photographs, medical records, psychological weaknesses, or browsing histories from people seeking that knowledge. If and when human beings lose their privacy, they also lose the agency to control their own lives. Instead, they become equal to cattle, which can be herded into flocks and exploited by a master. This master can be a government, a large corporation, or a criminal agency - whoever has the wherewithal (the technical and logistical capabilities) to snoop on the unsuspecting public. If humans lose their privacy, they are like stray animals, waiting to be reined in by a harness and branded by the hot iron of the husbandman. If the husbandman is caring, they will be milked or fleeced, but their lives will be spared. If the husbandman is vicious, they will be butchered and served on the table. In either case, they will lose their freedom to roam as they please. So the question of privacy comes down to human dignity: do we want to treat people as humans or as cattle?

What sets Freedombox apart from other similar software projects is that it aims to protect novice users, without any programming expertise. This includes people using home computers and smartphones to browse the internet, share pictures with friends, or control home appliances using web apps. To reach such a general user with minimal programming expertise, any privacy technology should be as simple to use as flipping a light switch. This is what the Freedombox aspires to be. During his presentation, Sunil demonstrated the installation procedure of the Freedombox with a theatrical gesture, connecting the box to the internet cable and to the electric plug (which is all the installation takes), saying
This is how we provide freedom, one home at a time  
This is an amazing achievement, but I believe victory is not that easy. For a long time, the bane of free software has been its technical complexity. Using most commercial applications with GNU/Linux software has been hard or even impossible. Many hardware vendors refused to provide drivers for their devices to interoperate with free software systems. If free software is a wide open field of freedom, there existed many fences that deterred people from accessing this freedom. There have never been any doubts about the quality and efficiency of free software systems; in fact, most web servers and large-scale software systems are built on top of free software, but such systems are built by capable engineers who are not daunted by the barriers of integration. However, this situation has been changing in the last few years, spurred by projects such as Ubuntu, which aim to reach the general public (known in programming parlance as n00bs). Even compared to the guarded optimism of Ubuntu, what the Freedombox aims for is radical: giving the power of a veteran system administrator to a novice user, all in a box that can be plugged in and switched on. Essentially, the Freedombox destroys the fences, the single factor that has limited the spread of free software.

Standing on the shoulders of Skywalkers: 

In order to assess whether this project will succeed, I think we have to go back to the dawn of computing. The seeds of the computing revolution were sown in the decades before the second world war. As the war raged on and everybody was awed by the power of the atom bomb, a greater force was rumbling below: the force of computing machines. It is said that computer science is about computers only as much as astronomy is about telescopes. Computers and algorithms are a powerful tool like no other in human history. What is it that they help us to see? In my opinion, there were three people who understood computers to the full extent of what they help us to see. I don't include the two giants, Alan Turing and John von Neumann, in that list. The three people were Claude Shannon, Norbert Wiener and Andrey Kolmogorov. In my mind, I call them by the acronym Skywalkers, giving one syllable for each person's last name: (s)ky(w)al(k)ers.

In 1943, while the second world war was raging and most mathematical work was still classified, Alan Turing visited Bell Labs in the USA, where he crossed paths with Claude Shannon. He later mused in his diaries about the conversation he had with Shannon.
Shannon wants to feed not just data to a Brain (a nickname Turing gave to a futuristic computing machine), but cultural things! He wants to play music to it. 
This incident helps me explain why I have chosen the Skywalkers as visionaries for where and how computation can be applied to understand the world. In my opinion, these three scientists captured the grand vista of everything that can be analyzed in a computational and information-theoretic manner: physics, biology, media, sociology, psychology, the very process of thinking itself - essentially, the entire skies. Everything appears more profound when seen through a computational lens, with all the underlying veins and sinews that make up the fabric of reality glistening in bright contrast.

I remember chatting with a physicist friend of mine when our conversation somehow came to the hypothetical case of aliens observing the earth and judging the complexity of its different artifacts. My friend thought that the most complex objects to be found on this planet would probably be the digital computers, with their microcontroller chips and complex electronics. But I thought differently: the most complex artifact would be the spoken conversations between humans. There is an inherent complexity in human language that is quite extraordinary. From the perspective of a computer scientist, computers themselves are not that complex. The computation required to generate complex artifacts may happen not only on digital computers, but also in biological ecosystems or in human societies. The complexity of these artifacts can be measured in an objective manner, as the length of the shortest computer program that can generate them. We have the Skywalkers to thank for the conceptual tools to do this.
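This shortest-program measure is Kolmogorov complexity. It is uncomputable in general, but the compressed size of an artifact gives a crude upper bound, which is a common practical proxy; here is a toy sketch of that idea:

```python
import os
import zlib

def complexity_proxy(data: bytes) -> int:
    """Crude upper-bound proxy for Kolmogorov complexity: the shortest
    program generating `data` is uncomputable, but the compressed
    length bounds it from above (up to a constant)."""
    return len(zlib.compress(data, 9))

regular = b"ab" * 1000       # very regular: a short program generates it
random_ = os.urandom(2000)   # incompressible: near-maximal complexity
print(complexity_proxy(regular))   # small (tens of bytes)
print(complexity_proxy(random_))   # close to the raw 2000 bytes
```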

Essentially, computer science is the study of information, which is everywhere. It is the study of control and feedback, which are everywhere. It is the study of probabilities and complexity, which pertain to everything. The Skywalkers are those giants who captured this vast canvas for computers.

Newton once said that
If I have seen further, it is by standing on the shoulders of giants. 
This quotation goes back to the 12th century, to Bernard of Chartres. Chartres is a town in central France that housed one of the oldest cathedrals; we will come back to this cathedral at the end of this blog. We can reframe this quotation for computer science as follows.
If we can learn to stand on the shoulders of skywalkers, we can also walk the skies. 
There have been many great computer scientists (and indeed scientists in other disciplines) who followed the lead of Shannon, Wiener and Kolmogorov. Going with the Star Wars theme, we may think of these scientists as Jedi knights who can control the Force, the underlying information pervading this universe. Unfortunately, despite the spread of personal computers and smartphones, most of humanity cannot think in computational and information-theoretic terms. This makes them particularly vulnerable to manipulation.

Behavioral conditioning and cybernetic freedom: 

Of the three Skywalkers, the person who understood the most about how computers would affect humans is Norbert Wiener. I am currently reading an introductory book for the general public that he wrote in the 1950s, titled "The Human Use of Human Beings". Reading this book is like walking on a glacier, slow and painful. When I was living in the Alps, I had my share of walking on glaciers: deep crevasses lurk beneath your feet, and you must be watchful of everything that melts and tumbles into the bowels of the earth below. But after you cross the glacier and climb a peak, you may catch a view of the entire glacier as it carves through the landscape. Only then do you get a grasp of its immense physical power. The ideas of Wiener are like such a glacier, cutting through rocks and boulders, across fields as diverse as biology, sociology and physics.

Unlike many of his contemporaries, Wiener realized that there is a dark side to computers, even prompting him to exclaim

I agree with what Einstein used to say about the three bombs: there are three bombs. The first one is the atomic bomb, which disintegrates reality. The second one is the digital or the computer bomb, which destroys the principle of reality itself - not the actual object, and rebuilds it. And finally the third bomb is the demographic one. 
This is a stunning pronouncement to make just after the destruction of Hiroshima and Nagasaki. Wiener was a pacifist and an environmentalist, and wearing those hats, he had no trouble finding people who agreed with him about the first and third bombs. But in his day, talking of computers as a "bomb" that would destroy the very principle of reality was quite unusual. The only people who could even remotely understand this pronouncement were the behavioral psychologists following the lead of Pavlov. Wiener explains how a machine can perform behavioral conditioning on a living creature, using the example of cattle and electric fences.

On cattle farms, the construction of wire fences strong enough to turn a steer is not easy. It is thus economical to replace a heavy fence of this type by one where one or two relatively thin strands of wire carry a sufficiently high electric voltage to impress upon an animal a quite appreciable shock when the animal short-circuits it by contact with the body. Such a fence may have to resist the pressure of the steer once or twice; but after that, the fence acts, not because it can hold up mechanically under pressure, but because the steer has developed a conditioned reflex which tends to prevent it from coming into contact with the fence at all. Here the original trigger to the reflex is pain; and the withdrawal from pain is vital for the continued life of any animal. The transferred trigger is the sight of the fence. 
Crucially, from the perspective of cybernetics, it does not matter whether the communication is between man and man, between man and machine, between animal and machine, or between machine and machine. The purpose of any communication is to establish control. Whether we as human beings allow ourselves to be behaviorally conditioned by electric fences (in their various metaphorical forms) is up to us. In cybernetic terms, freedom corresponds to control over the electric fences that deter humanity from trespassing. Where are these electric fences in today's society, and who controls them? It is only possible to answer this question when one is versed in the tools of information processing.

Most people didn't understand Wiener. John McCarthy, another great hero of mine, who went on to coin the term "artificial intelligence" (and to found the scientific field), described Wiener as a "humongous bore". Along with his co-conspirator Marvin Minsky, he basically created the "new" field of artificial intelligence in order to escape the prying clutches of Norbert Wiener and John von Neumann (another giant with a massive ego). Wiener had been a terrible mentor (his mentorship of McCulloch and Pitts, who invented artificial neural networks, is another terrible story), and he had his own issues from an awful childhood, having been raised in the public eye as a child prodigy by a disciplinarian father. However, by abandoning Wiener, the fledgling field of artificial intelligence (AI) was divorced from its cybernetic roots, where machines were compared not only with humans but also with animals. The cognitive machinery of the human brain shares its core architecture with animals, and most of its functions are subconscious. The thoughts expressed in language, driven by conscious desires and motivations, are only the tip of the iceberg. But this final, superficial layer became the sole focus of AI, and mostly remains so even today. Between McCarthy and Minsky, Minsky gained the closer understanding of Wiener, aided by his careful study of Wiener's protégés McCulloch and Pitts. However, I believe even Minsky held an unhealthy fixation on higher-level cognitive processes - those visible under conscious human inspection. There were very few scientists in the computer science community studying subconscious mental processes, and building defenses against their manipulation for the safety and freedom of humans.

Freedoms in Free and Open-Source Software: 

The plight of AI as it became fixated on higher-level cognitive processes is not just an esoteric artifact, worrisome only to scientists and philosophers. In fact, the merry pranksters led by McCarthy and Minsky conquered the entire realm of computing technology. They spearheaded the revolution of personal computing in the 1980s. They were instrumental in defining the world wide web and its myriad applications. Most relevant to this blog, Richard Stallman, the founder of the free software movement, came out of the MIT AI lab. It is he who defined what "freedom" means with respect to the use of software: the freedom to use it, the freedom to inspect the source code, the freedom to modify it, the freedom to share it, and so on. These freedoms are necessary for the human user of software to be his own master, but they are not sufficient by themselves. True, if these freedoms are absent, whoever controls the software also controls the user. But even if all these freedoms are present, the human user can still be under the control of the software provider, for the simple reason that the human user is unaware of his own subconscious mental processes. In order to achieve true freedom, what needs to be revealed openly (in a human-readable format) is not only the software code, but also the hidden mechanics of the human cognitive processes at play while the user interacts with the software.

In the earlier days of personal computing, the functionality and specification of software could be given in a small, finite description. This functionality was the same for every user testing the software. So in principle, even though the average user is unaware of how the software is gaming his subconscious brain, it is possible that somebody exists with the Jedi powers of information processing to analyze the entire chain of human-software interaction. However, this has changed dramatically in recent years. Nowadays, by accumulating a lot of data, a software system evolves a personalized model of the human user and reacts to him in a unique, personalized manner. Consequently, the reaction modalities are unlimited and cannot be tested by other users. This is particularly true of cloud-based software that hides the data collected from users in a distant database. But even if the raw data were available for inspection, the average user may not have the computational capacity and the education in information processing to understand how the software is curtailing his freedom.

Beyond free software, there exists the paradigm of open-source software, which does not support all the freedoms listed by Stallman. This paradigm grew out of the work of Linus Torvalds and others, who argued that the freedom of "source-code visibility" is sufficient for most purposes. Many large internet software companies, such as Google and Facebook, are releasing the source code of certain software projects. This is welcome, but I believe it is still unclear whether this dumping of source code helps the freedoms of users. In fact, large computer systems today use tons of user data, locked in the proprietary file systems and databases of these companies. It is this data, more than the software source code itself, that predicts the behavior of the software. Without this data, I do not think a machine learning algorithm can technically be considered even "open-source". In this way, these public releases of source code fall short of even the definition of Linus Torvalds. What we have instead is a phenomenon similar to a viral infection: given the limited size of the ecosystem of programmers working in a particular field, public releases of source code are an attempt to monopolize these ecosystems. What results from this infection is an early stage of technology lock-in, before the technical field has even matured. This is not a new phenomenon; one can argue that Unix and the C programming language are themselves artifacts of technology lock-in. But the consequences of technology lock-in in today's era of big data will be far more severe.

Finally, I want to talk about the tremendous pressure on scientific researchers to release the source code of the software systems they used for conducting their experiments. I think it is admirable when a scientist releases source code, but this cannot be imposed. Often, the progress of science depends on the personalized investigations and passions of a scientist. The scientific community needs to tolerate dissenters with diverse perspectives, and create avenues to build a common field of study from multiple viewpoints. Often, it is necessary for scientists and research institutes to build credibility by publishing a series of results before they can release their methods or software. If they release their secret sauce before they achieve due recognition, their unique perspective on research will be submerged in the din of the majority. Essentially, if the software is not even released (freedom 0, the freedom to use the software, is not granted), there is no question of violating the other freedoms of source-code inspection and so on. On the other hand, releasing source code without releasing data is a very sly game.

Education of a new generation of computational citizens: 


We are still in the very early stages of the computing revolution. In terms of impact, the only comparable inventions are those of writing and printing. Both these technologies were revolutions in how information is organized and processed: they tremendously affected all aspects of society - economics, law, health, education and government. Overall, these effects have been resoundingly positive. But society needed a couple of centuries to absorb the aftershocks of these technologies. The domain that concerns us the most is education, as this is where future citizens are forged.

When writing was invented, it facilitated the externalization of memory: it helped people to think and reason about concepts and ideas too large to fit in a brain. The pessimist may have said that people would need to remember less and that this would corrupt the faculty of memory (such an opinion was voiced by Socrates), but indeed the opposite happened. New methods for expanding memory, such as the Method of Loci, were invented at the same time as writing. This story of the art of memory is recounted beautifully in Joshua Foer's book "Moonwalking with Einstein". In the long run, writing as a technology created a new way of doing science as well as politics.

Something similar happened with printing, which greatly expanded the reach of knowledge for ordinary citizens. The pessimist may have said that with so many books being printed, quality would be dwarfed by sheer quantity, but quality actually increased. New methods were invented for judging the quality of books and for quickly scanning large passages of text. The writer Nicholas Carr argues in his book "The Shallows" that the practice of quiet contemplation while reading a book encouraged other mental processes dealing with higher cognition (engaging the prefrontal cortex of the brain). The age of enlightenment was a direct result of this expanded consciousness of people, as well as of a greater tolerance and empathy for the perspectives of others.

With computing, we have simply not yet experienced such a revolution in education. I think Alan Kay, the inventor of object-oriented programming (along with a whole bunch of other ideas), expressed this most clearly. He says that nobody, except in scientific fields, uses computers for what they are good for. Everybody else is using them to mimic old media: books, television, movies etc. What we need to cultivate is a new type of thinking, based on a computational and information-theoretic perspective - what I referred to earlier as "standing on the shoulders of Skywalkers". A person who is able to think computationally will not need protective barriers that treat him like a child. He will be conscious of his cybernetic freedoms as a fully educated citizen. No powers - whether in government, in the greedy towers of finance, or in the dungeons of crime - will take him for granted. But to get there, we need new programming languages and new paradigms of computing. We need new methods of sharing data and of accessing it computationally across the public. What we need are freedoms far greater than those currently envisioned by the movements of free software and open-source.

If the average person is able to program computers and regularly analyzes information through computational lenses, this will produce physical changes in the brain, similar to how the quiet contemplation of reading a book changed the prefrontal cortex. I am optimistic that when this happens en masse, we will build a society that is far more cultivated and compassionate than the one we have today. In fact, this might be our only salvation from the ecological and environmental catastrophe that we are hurtling into. But we don't know how long this will take. Alan Kay says it may take a couple of centuries. He likens himself to people laying bricks to construct a cathedral, like the cathedral of Chartres in France, which took several generations to finish. These cathedrals were not only wonders of architecture, but also housed large libraries where the scholarly works of Arabia, China and India, as well as of ancient Greece and Rome, were studied by European students. This sparked the period of the renaissance.

I hope we will see a similar computational renaissance in a future not too distant from now.




Alan Kay's interview: Go to 6:20 to get his message on the culture of computing. But watch the entire interview for greater wisdom. 

PS: The title of this post is a reference to John Steinbeck's novel "Of Mice and Men". In this novel, there are two characters, George and Lennie, who can be interpreted in many ways. The conscious and self-reflective George refers to the intelligent programmer who has a deep desire for freedom and who wishes good for society. The kind-hearted brute Lennie can refer to the uneducated citizen who is given access to the vast powers of computing. He may also refer to computation itself, personified. There are easy references to be found for the other characters in the novel, each with their own greed and insecurities. The novel has a sad ending: George shoots Lennie in order to spare him a more painful death. This is my greatest nightmare. I hope the future does not hold such an end for computing. I hope our generation of humanity will avoid this end.