Sunday, February 07, 2016

A comparison of the Confucian-Daoist dichotomy with the Vaishnavaite-Shaivaite dichotomy: A paraview of Edward Slingerland's "Trying not to try"

The analogy of a chariot for the human mind: (Left) The charioteer in Plato's Phaedrus controlling the good and bad horses (Right) The chariot of Arjuna in Mahabharata, driven by Krishna

Chariot of the mind: 

Plato compares the human mind to a chariot. In his dialogue Phaedrus, Plato speaks of a person's intellect as a charioteer driving a chariot pulled by two horses. One of the horses is of a noble breed, representing the positive aspects of a passionate nature. The other horse represents the irrational passions of the body that pull the chariot in the wrong direction. It is the job of the charioteer (the intellect) to perpetually control the chariot and guide it towards enlightenment. This distinction between the cold reasoning intellect (the charioteer) and the hot passions of the body (the horses) may be rooted in the biological nature of the human mind. There are separate neural circuits in the brain: the basal ganglia and the limbic system for fast responsive action (like the horses), and the cortical regions, including the anterior cingulate cortex, for providing reason and feedback (like the charioteer).

A very similar analogy of the mind as a chariot is present in the Katha Upanishad, an ancient Indian philosophical text, but with some subtle and interesting differences. These differences illuminate the distinction between the Platonic worldview and the Samkhya worldview. The Platonic and Samkhya systems greatly influenced subsequent philosophical development in the West and in India respectively, so it is interesting to look at these differences. The Samkhya system divides physical nature (Prakriti) into 5 layers of reality:

  1. Annamaya - the inanimate layer, e.g., rocks
  2. Pranamaya - the layer of breath, e.g., plants
  3. Manomaya - the layer of mind or sensory-motor control, e.g., animals
  4. Vijnanamaya - the layer of intellect or linguistic understanding, e.g., human conversation
  5. Chinmaya - the layer of ego or historical self, e.g., the memory of humans
The suffix maya that is used for each of these layers denotes the fact that these layers are measurable and objective. One interesting consequence of the Samkhya system, of relevance to artificial intelligence (AI), is that if something is measurable and objective, it can in principle be replicated in a computer. However, this is not exactly equivalent to the modern reductionist view of the mind. Samkhya accommodates an aspect of mind called Purusha, which can be loosely translated as the experiencer or the unchanging self. This is considered to be beyond and separate from the 5 layers of nature (Prakriti). Unlike Prakriti, Purusha is considered immeasurable and unchangeable. Many translators have used the word soul to refer to this, but as we have seen, many aspects of the Platonic soul are already described in the 5 layers above. So, the Purusha can be understood as a dramatically reduced aspect of the soul, beyond all the realms of action, movement or change.

The Katha Upanishad follows the lead of the Samkhya system and describes the chariot as follows: the unchanging self (Atman or Purusha) is the lord of the chariot. The intellect (Buddhi) - composed of linguistic understanding - is the charioteer holding the reins. The reins are the mind (Manah), referring to the sensorimotor control in the brain. The horses are the senses (Indriya). The paths ahead are the objects of the senses. It is the job of the intellect to drive the horses along the right path, so that the chariot is led to enlightenment.

A very interesting modification of this analogy of the chariot occurs in the parable of the Mahabharata war. In the epic of Mahabharata, the character of the hero Arjuna is a metaphor for the conscious mind. His chariot is driven by Krishna, who is no ordinary charioteer. In fact, there are many characters in Mahabharata who are paragons of intellect - Vidura, Bhisma, Drona etc. Probably the best personification of intellect and wisdom in Mahabharata is Yudishtara - the elder brother of Arjuna. Yet the chariot of Arjuna is driven by none of them, but by Krishna - whose character is a metaphor for the cosmic order of the universe.
What does it mean to have one's intellect replaced by the cosmic order?

I think we can appreciate this analogy much better by looking northwards from India, and learning about the philosophical history of China.

I recently read a fantastic book by Prof. Edward Slingerland titled "Trying not to try", which narrates the various philosophical debates in ancient China. I will give a paraview of this book in my blog, connecting it with my perspectives as a computer scientist and as a person versed in Indian mythology. I am also bringing in some perspectives from Prof. Slingerland's excellent MOOC "Ancient China meets modern science", which I am taking right now.

The central concept of Chinese philosophy is a mental state known as Wuwei (pronounced ooh-way) that can be loosely translated as "effortless ease". A related, but not completely identical, idea is the mental state termed "flow" by the Hungarian psychologist Mihaly Csikszentmihalyi. A sportsperson or a musician is said to be in this state of flow when their conscious brain is switched off and their expert movements are achieved by completely unconscious control. In ancient China, the notion of wuwei went beyond physical expertise and denoted a type of mental and spiritual dexterity. Great philosophers like Confucius, Mencius and Zhuang Zi developed theories on how to achieve this wuwei state. Unlike western (and to some extent, Indian) philosophy, wuwei brings the notion of salvation into the very present, and connects it to success in both material and spiritual pursuits.

Why bother with comparative religion?

I once listened to an interesting podcast on the role of synesthesia in different religious traditions. As often happens in discussions of comparative religion, this discussion was limited to the Abrahamic traditions of Christianity, Islam and Judaism. In the study of religion, discussions about how a religious observer experiences their spiritual life are grouped under the term phenomenology. Since religious experience is inherently subjective, one can argue that it is beyond the ambit of science. This is probably true for theoretical topics such as theology and religious ontology, but phenomenology offers an interesting glimpse for science into the religious mind. Despite the obvious differences in religious ideology between the Abrahamic religions and Buddhism / Hinduism, the phenomenology of what is reported by the mystics in these various religions is quite comparable. But mystics are a rare breed in any religious tradition. Most people are concerned with improving their lives in the here and now. In this regard, of all the great religious traditions, I think the Chinese religions are the best at bringing mystical phenomenology and secular lives into a common analytical framework - through the notion of wuwei. Since China was not colonized to the same extent as other cultures such as India, the study of Chinese religion is also relatively unimpeded by prejudice.

Thus, in our modern era, it is illuminating to study what Chinese philosophers had to say about achieving a harmonious state of mind. In the fields of psychology, computer science and cognitive science, the shadows of Plato and Freud loom large in the subconscious of researchers. Alternate viewpoints from China or India can serve as a means of balancing this hidden bias. The analogy of a person's mind to the chariot is especially useful in developing computer software and cybernetic systems, as ultimately they are supposed to serve a human user (the lord of the chariot).

Hot and cold cognition in the human brain: 

The human mind is an engine that runs on at least two gears - hot and cold. This is probably the case for any complex system that has to operate in the real world, under strong constraints on the amount of time available to make any decision. Hot cognition is easier to understand. It is characterized by rapid instinctual reaction to a stimulus from the environment: to escape from danger, to seize an opportunity, to navigate through obstacles etc. Delay in responding to such environmental stimuli can be a matter of life and death. Such instinctual response to the environment can be seen throughout the animal world. In humans as in other animals, this behavior is coordinated by the sub-cortical brain areas in the limbic system dealing with emotions and spatial memory. But despite this outward similarity, the human limbic brain is significantly more advanced. In particular, it exploits the extra storage and computation in the cortical areas, which can be considered offshoots of the limbic brain. In any case, hot cognition in both humans and animals is characterized by rapid response to stimulus.

In contrast, cold cognition may be a uniquely human trait. It proceeds by slow and rational thinking, where all the alternative hypotheses are explored by the brain in order to reach the optimal decision. Cold cognition is the reason why we have science and culture. But rational thinking is computationally expensive and needs a large window of time. Typically, it involves examining and overcoming our structural biases about the world, and this might generate emotional turmoil. At the least, cold cognition requires us to plan further into the future than what can be immediately seen and felt by the senses.
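As a computer scientist, I find it helpful to caricature these two gears in code. The sketch below is purely illustrative - the stimuli, the options and the deadline threshold are all invented - and it is emphatically not a model of the brain; it only shows the trade-off between a cheap cached response and an expensive exhaustive search under a time budget:

```python
# Toy sketch of dual-process decision making (hot vs. cold cognition).
# All names and numbers here are invented for illustration.

HABITS = {"snake!": "jump back"}  # hypothetical stimulus-response pairs

def hot_response(stimulus):
    """Fast and reflexive: return a cached habit, if one exists."""
    return HABITS.get(stimulus)

def cold_response(options, evaluate):
    """Slow and deliberate: score every alternative and pick the best."""
    return max(options, key=evaluate)

def decide(stimulus, options, evaluate, deadline_s):
    """Under a tight deadline, fall back on habit; otherwise deliberate."""
    habit = hot_response(stimulus)
    if habit is not None and deadline_s < 0.1:  # arbitrary threshold
        return habit
    return cold_response(options, evaluate)
```

With a 10-millisecond deadline, a known stimulus like "snake!" gets the habitual "jump back"; given ample time, or a novel stimulus, the same machinery hands the choice over to the exhaustive search.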

In any real-world situation, the human mind is constantly confronted by a choice - whether to rely on hot cognition or cold cognition to make a decision. Most often, this question is not felt consciously, and is resolved using the cheaper computational apparatus of hot cognition. But at other times, the conflict takes center stage in our conscious mind and plays out as a battle.

Should I eat this ice cream, or should I stick to my diet?
Should I steal this money, or should I return it?
Should I prepare for my exams, or should I watch a movie?
Should I donate money to this charitable cause, or should I keep it for myself?

We speak of this conflict as between the heart and the mind, or between the body and the mind, or between emotions and reason, or between the horses and the charioteer. Resolving these conflicts in favor of cold cognition is not easy, and requires computational resources. Psychologists term this cognitive overload or ego depletion. Each person has a limited reservoir of mental capacity for voting in favor of cold cognition, and if this reservoir is depleted, they will make cognitive mistakes. The structure of the brain responsible for this cognitive control is the ACC (anterior cingulate cortex). The psychoanalyst Sigmund Freud had a dismal view of the human condition, torn in this eternal battle between the hot and cold cognitions, which he termed the id and the super-ego. He wrote that culture and civilization condemn humans to be eternally in this state of Unbehagen (queasy, like a bad stomach upset). Freud moaned that if only the reins of civilization were torn apart, human passions would run loose like free horses and engage as they please in rape and murder. This pessimistic (and frankly, ridiculous) view of human cognition has led to several crappy psychiatric treatments, as well as some dodgy speculation in cognitive and political sciences.
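The reservoir metaphor of ego depletion can likewise be caricatured in a few lines of code. This is a deliberately crude illustration with made-up numbers, not a psychological model:

```python
# Toy caricature of ego depletion: each act of self-control draws down a
# limited reservoir; once it is empty, the hot, impulsive option wins.
class Willpower:
    def __init__(self, reserve=3):
        self.reserve = reserve  # invented unit: "effortful choices left"

    def choose(self, cold_option, hot_option):
        """Exercising cold cognition costs one unit of reserve."""
        if self.reserve > 0:
            self.reserve -= 1
            return cold_option
        return hot_option  # depleted: the impulse wins

w = Willpower(reserve=2)
print(w.choose("stick to the diet", "eat the ice cream"))  # stick to the diet
print(w.choose("return the money", "keep the money"))      # return the money
print(w.choose("prepare for exams", "watch a movie"))      # watch a movie
```

The third choice defaults to the hot option only because the first two exhausted the reserve - which is the whole content of the ego-depletion claim.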

This is where ancient Chinese philosophy may serve as an effective antidote. Like Freud, the sage Confucius realized that there is a battle in the human mind between cold and hot cognition. However, he thought that this battle can be resolved happily. Similar to how a rock is carved into a statue or how a block of wood is carved into a musical instrument, Confucius considered it possible to carve the hot cognition into complete alignment with the aims of civilization. With sufficient training, he considered that a person will become naturally and effortlessly good: perfectly dextrous, perfectly compassionate and perfectly courageous. This is a glorious and optimistic vision of human nature, even as it recognizes its inherent limitations. I think the Confucian worldview has more support from objective evidence than the Freudian worldview. People don't become rapists and murderers overnight if the lights of civilization get turned off. In fact, during natural catastrophes and disasters, people overwhelmingly help each other and come together as a society. Confucius would reason that this happens because of the continuous training of human hot cognition by living in a civilized society. He valued the importance of daily rituals that show propriety and kindness. He valued a holistic education - one that not only teaches students to be effective craftsmen or soldiers, but also instills in them a love for civilization and culture. Thus, arts, sports and music were considered an important part of Confucian education. When society and culture are organized such that the daily lives of their inhabitants are bathed in a language of rituals, Confucius argued that human virtues such as compassion and courage will be downloaded into the hot cognition, becoming as effortless to perform as an expert musician playing an instrument. This effortless ease is known in Chinese as wuwei.

Wuwei and the paradox of flow:

But is it really possible to achieve a state of wuwei by rigorous training? Before we look at the broader philosophical definition of goodness and wisdom, we can analyze how the state of wuwei is achieved by sportsmen and athletes. Every athlete wants to be in the zone and does their best to stay in this zone. But as any athlete can tell you, this is not easy. In particular, thinking consciously about how one is playing is disastrous for performance. For example, in tennis, consciously observing the movements of one's limbs is a recipe for missing the ball. So, a good tennis coach will desist from commenting on the specific bodily movements of the player, but will give suggestions on how to improve focus on the game.

In a similar manner, thinking consciously about wuwei will prevent a person from achieving it. In this manner, it is similar to other human unknowns such as how to fall asleep or how to impress a romantic partner. Any conscious artifice or trick will ruin the goal, especially if the person at the other end becomes aware of the trick. When we evaluate the virtue of a compassionate act or of an artistic performance, we will be displeased if we know that this act was framed or set up as a plot. In the Nicomachean Ethics, Aristotle says that unlike a craft, a virtue is to be evaluated not just by the final object that is produced, but by the intrinsic process that produced it. Thus, to be virtuous, a person has to be inherently good and not just fake it. In this regard, wuwei is particularly interesting because the performer and the evaluator are one and the same person. One cannot fool one's own consciousness.

Due to the central nature of wuwei, all Chinese thinkers have grappled with the problem of how to get into this state without consciously trying to get there. The answer of Confucius was to mould the subconscious of the mind, through daily rituals and habit, as well as through signs and symbols that lurk in the environment. But the problem is that these rituals and signs will force the person to be self-conscious, to be aware that he is trying to get into wuwei. This is similar to how a person can ruin a date by reminding the partner ostentatiously, through every sign and action, that it is a date. This is a paradox that cannot be resolved. Other Chinese thinkers, known as Daoists, took the position directly opposed to Confucius, saying that one should not try at all and just go with the "flow". They systematically went about deconstructing the cultural artifice that Confucius upheld.

Confucian-Daoist dichotomy:

Three laughers at the tiger ravine: Hui Yuan, Tao Yuanming and Lu Xiujing realize that spiritual purity cannot be measured by artificial boundaries

To a novice, Daoist philosophy sounds like environmentalism or nature religion. The book of the "Dao De Jing" extols the virtue of the "uncarved block". It uses a mythological character - the old master (Lao Zi) - to contrast with the message of the well-known master Confucius. The Daoists argued that, by propping up artifice and cultural rigidity, Confucians were doing everything possible to prevent people from getting into the natural state of wuwei. At the core of the debate between the Confucians and Daoists is the very nature of human childhood: is it something inherently good or bad? Is it something that needs to be preserved, or something that needs to be sublimated by culture? Confucians would argue that the wild nature of man, as present in his childhood, is something sub-human. They wanted to refine this nature through the tools of culture. Daoists would argue that this core human nature is the very best that there is. Their mission was to rediscover this original purity by taking apart the artificial notions of cultural rigidity.

Who is correct here?

This is a debate that has not been resolved to this day. During the European Enlightenment, this debate played out between Hume and Rousseau. The ideals of romanticism not only gave us great works of philosophy, but also great works of art and music. Poets like Yeats and Wordsworth would not have existed without this shift to romanticism in art. In the United States, this debate was spearheaded by figures no less than Thoreau and Emerson. These debates reverberated throughout the world: Gandhi was deeply inspired by the romanticists and his philosophy was essentially a call to return the world to its original state of harmony with nature. But the romanticists did not win the debate conclusively.

Today, one of the sharpest voices trying to undermine the romanticists is that of the cognitive scientist Steven Pinker. His tome "The Better Angels of Our Nature" is essentially a war cry against romanticism. This book argues, through myriad figures and numbers, that there is nothing to be salvaged in the wild state of nature and that all human good is a product of an organized civilizational state. Confucius would approve of Pinker, although their philosophical outlooks are not exactly identical. Notably, Confucius would scoff at the reductionist Freudian mindset that does not find value in hot cognition (or in ancient ritual, for that matter). But what they both share is a disapproval of human childhood. Pinker spends a considerable amount of time in his book arguing how human children are riddled with jealousy and mean behavior. He is following in the footsteps of Jean Piaget, who argued that children do not have a theory of mind in their early years and cannot understand the notion of compassion. However, Piaget may not have the last word. More recent scientific work on child psychology, described by Alison Gopnik in her brilliant book "The Philosophical Baby", argues that children have innate compassionate behavior, and that in some ways, their consciousness is superior to that of adults. Daoists would approve of this research.

Essentially, all philosophical debate can be understood as a dialectic between structuralists and deconstructionists. The structuralists want to build an artifice of civilization and the deconstructionists want to dismantle this and return the society to a state of childhood. In western philosophy, the earliest such dialectic was between Aristotle upholding the structuralist position and Diogenes deconstructing it.

But can we find the complete answer in blanket judgements for one side or the other? After all, achieving harmony with nature and achieving a state of wuwei are inherently paradoxical quests. The interesting thing about ancient China was that philosophical innovations rubbed off from one side of the debate onto the other, producing an enriching dialectic of thoughts. An important philosopher who refined the Confucian strategy was Mencius, who argued that people have natural sprouts in their soul that can be cultivated into full-scale virtues. He used the analogy of a farmer cultivating rice sprouts - planting them at the right spot with adequate water and sunlight, but then waiting patiently for them to grow fully. In this way, he accommodated the Daoist argument that human nature is inherently good, but still argued for the centrality of Confucian ritual and learning. Another important philosopher, on the Daoist side of the debate, is Zhuang Zi, who deconstructed the very romanticist attitude of the early Daoists. He argued that trying to consciously go towards a primitive mode of living is as foolish as consciously following the Confucian ritual: neither would bestow wuwei on the person. Echoes of this debate later reverberated among the Zen Buddhists in Japan.

The reason why ancient China achieved such a fruitful philosophical exchange was that neither side of the debate was oblivious to the essential paradox at the heart of wuwei. They saw the value of engaging the other side in the debate. Many Chinese philosophers had close friends who believed in other philosophical paths, but with whom they conversed regularly. This debate continued to flourish when Buddhism was embraced in China. One of the best examples of this philosophical fluidity between the various schools is the story of the three laughers at the tiger ravine.

I am not sure if we have such relaxed attitudes today, even in our scientific communities, about respecting alternate viewpoints and holding philosophical exchange. The western religions have historically suppressed alternate viewpoints. The modern scientific method has weakened these prejudices, but has not abolished them completely. As I argued earlier in my blog, we are often quite reckless in how we think using negation. This thinking is particularly problematic when dealing with topics of an inherently paradoxical nature. In the following, I will present how a philosophical debate quite similar in nature to the Confucian-Daoist dichotomy was conducted within Indian culture, which used different strategies for dealing with the paradox.

The duality of Vishnu and Shiva:

Indian culture and religion ultimately stem from the philosophy of Samkhya, which posits a duality between nature (Prakriti) and the experiencing self (Purusha). It argues that any duality we observe in whichever situation of life ultimately stems from that root duality. As I explained in the beginning, nature (Prakriti) can be observed in 5 layers, with the higher layers dealing with subtle concepts like intelligence, memory and ego. All of this is considered to be within the realm of objective measurement. In fact, the word Samkhya refers to enumeration. Everything in nature (Prakriti) can be numbered and measured. The residual beyond measurement is called Purusha, and it is described as unchanging, eternal and unmovable. In contrast, nature (Prakriti) is ever dynamic, metamorphosing between different forms. The mechanics through which different objects in nature transmute from one form to another is given by the 3 Gunas (qualities): Satvik (self-reflection or renunciation), Rajas (aggression or growth) and Tamas (inertia or destruction). I described them in greater detail in an earlier blog.

On the topic of human culture and civilization, the gunas of Rajas and Tamas are the relevant ones, symbolized at their subtlest level by the deities Vishnu and Shiva respectively. In the Samkhya system, the deities are entirely naturalistic (belonging to Prakriti), and can be loosely understood as programs running on a cosmic computer. Of these two deities, Vishnu preserves culture and Shiva destroys it. Thus, they split very neatly into the Confucian and Daoist camps. Indeed, Vaishnavaites who worship Vishnu extol the values of culture, whereas Shaivaites who worship Shiva deconstruct the value of culture. From a historical and anthropological point of view, Vaishnavaites have been among the groups in India most resistant to changing cultural norms and practices. Many powerful Indian kingdoms followed in the Vaishnavaite mould and aimed to establish order in accordance with civilizational norms. In contrast, Shaivaites were often ascetics and revolutionaries, residing in the wilderness and at the margins of society. The most extreme Shaivaites are known as Tantrics or Aghoris, who reside in cremation grounds and eat rotten flesh. They do this in order to deconstruct their mental constructs of society and nature. They are India's counterparts to Zhuang Zi's howling sages of the mountains.

There are many myths and stories in Indian culture that elaborate on this dichotomy. But the interesting thing is that the mythologies of Vishnu and Shiva are deeply interwoven with each other. Neither the Vaishnavaite nor the Shaivaite tradition stands on its own: each needs to invoke mythological imagery from the other camp to tell its stories. For example, Vishnu carries a conch shell that symbolizes the wild nature of wind (an attribute of Shiva). Shiva has a son who is the leader of the solar deities (which include Vishnu). So when mythography encodes a cryptic tale of how to achieve a mystical state of mind, it invokes the attributes of both Vishnu and Shiva. This corresponds to the paradoxical nature of these mental states, which cannot be achieved by trying consciously. Sometimes, one has to follow rules and norms. Sometimes, one has to deconstruct them.

This philosophical dialectic on the value of ritual and norms stems from an even earlier period in India. The first division was between the Brahmanas (who praised the Vedic ritual) and the Shramanas (who deconstructed the ritual). The religions of Buddhism and Jainism sprang from the Shramana tradition. Shaivaite and Tantric asceticism also stem from the Shramana tradition. But the interesting thing with each of these religions is that, once they started to gain a significant number of followers, they had to seriously engage with the opposing philosophical positions. For example, Buddhism split into various camps based on how ritual was valued. Even as the religions split and multiplied, the mythographical imagery developed by these schools was frequently borrowed by the other camps. As Confucius would argue, this mythographical imagery is like an environmental backdrop that helps draw the subconscious into a state of wuwei. Unlike dry logic, which suffers from limitations of expressibility in dealing with paradoxical concepts, mythography provides a cultural language to reason with them.

We can consider the analogy of the chariot to the human mind in a similar manner. Here, the symbol of Krishna (an avatar of Vishnu - the preserver of cosmic order) is used to express the state of effortless ease that a person achieves while performing an action. During the Mahabharata war, Krishna instructs Arjuna on how to achieve this effortless ease while performing difficult tasks - these instructions were written down as an independent sacred text known as the "Bhagavad Gita". Krishna asks Arjuna not to consciously desire the fruit of any labor, but to follow a path of desire-less action. If his actions are in accordance with the cosmic order (termed Rta in India, or Dao in China), he will succeed.

This message from the Gita is referred to in Edward Slingerland's book, but unfortunately it is mentioned as a Daoist message. I think it is more in line with Confucius, as is most of the Vaishnavaite mythology. One important point of comparison between Vaishnavaite and Confucian ritual is the nature of the timing for the ritual. Similar to the Vaishnavaites, Confucians had specific rituals for the various seasons as well as for astronomical events (alignments of stars and planets). But unlike the Vaishnavaites, they did not have rituals corresponding to the time of day. In ancient India, sunrise and sunset were central to the anchoring of the ritual, when people were instructed to salute the Sun. The outer sun is considered a manifestation of the inner sun, and the supreme head of all these solar deities is Vishnu. Thus, it is natural for Vishnu to retain the role of preserver of ritual and culture.

In contrast, Shiva descends from the wild deities of wind (Rudra), who destroy culture and civilization. Shiva is termed the destroyer of the three cities: the physical world, the mental world and the sensory world. An important form of Shiva is Pashupathi - the lord of animals - which shows the sacred symbolism of the primitive aspect of nature. Like Lao Zi, Shiva was also termed the ancient one. In all these aspects, the mythography of Shiva aligns well with the Daoists. The abode of Shiva is considered to be the Manasa Sarovar lake, which lies in western Tibet. This leaves open the tantalizing possibility that the Daoists and Shaivaites have a common cultural point of origin. In any case, exploring this common philosophical bent may help deconstruct the worldview that we now take for granted, but which mostly excludes Indian and Chinese philosophy.

Manasa Sarovar lake: The abode of Shiva

Monday, January 25, 2016

Homebrew AI Club

Last Saturday, I had a great chat on Skype with Samim and Roelof - two very cool guys who are planning a sort of machine learning revolution. I met them on Twitter, where I came across their "Ethical Machines" podcast. This is a remarkable podcast that I highly recommend to anybody who is interested in AI, computers or the future of society. Unlike the regular commentary in the mass media, whose portrayal of AI is often a one-dimensional caricature of the question "How far are we from the Skynet of Terminator?", this podcast gives a nuanced understanding of everything about AI. This is because Samim and Roelof are both active programmers and researchers, who follow and shape the most recent trends in AI. But this is not just a technical podcast. It is also about the ethics and politics of AI, as well as about art and culture.

I will give a small profile of Samim and Roelof. They created Gitxiv, which is a nice mashup of Arxiv and Github - two places on the internet where researchers post preprints of scientific articles and the source code of their projects. Gitxiv is a place for combining these two, and more. Compared to various other scientific communities, the machine learning community is remarkably open and sharing, with even big companies making the source code of their projects available to the general public. Gitxiv is a place to bring all these energies together. They are soon planning to extend this venue to share data sets - an important element for reproducing scientific results today. I feel Samim is an artist at heart; he is interested in AI because it opens a new frontier in creative exploration. He is also a sharp thinker and entrepreneur. He is based in Berlin, which is probably the most interesting place in the world for a person with this combination of qualities. Roelof is a Ph.D. student in Sweden, working in the area of natural language processing. But he is also a political activist with a strong understanding of social activism. In the past, I briefly participated in free-software activism, though this is nothing to write home about. But I understand how incredibly enriching it is to do the basic ground work in social activism and to engage with people. So Roelof has some great perspectives from his experiences.

Together, Samim and Roelof make a great team, and they share a passion for democratizing AI technology for the masses. I ran into them by making some snarky comments on Twitter, with my typical pessimism about the future of AI. Especially over the past year, I have grown very critical. I wrote many critical articles in this blog about the progress of AI and about the increasing despondency over our society's future with it. This is rather depressing, as I consider myself to be an AI researcher at heart. There is a classic trope in PhD Comics about the "bitter post-doc" - I probably fit this bill very well. When I speak with younger researchers and students, I have to consciously watch what I say so as not to depress them completely. But my pessimism about AI has less to do with technology, and more to do with the single-minded "Skynet" narrative that our society is building around it.

Speaking with Samim and Roelof lifted my spirits, at least for a brief while. Samim is a very optimistic guy who sees the silver lining in any cloud. Right at the beginning of our chat, he pointed out that we need more narratives for telling the AI story, and that there are already several positive ones - the ecological perspective, the global consciousness perspective, the young entrepreneurship perspective. Even with my bitter pessimism, I couldn't deny that! Our chat then turned to how we can build on these narratives. We remembered the glorious days of the personal computer revolution in the 1960s and 70s - in some way, we are all fans of the great pioneers like Doug Engelbart, Alan Kay and so on. At the same time, we are conscious of the great consolidation going on in the market, where huge data clusters are condensing under the rule of monopolies. I talked about the consolidation going on in the visual effects and creative industries, dampening some of Samim's hopes that this might be a way out. But Samim is too optimistic to dwell on my negatives. Roelof added a very interesting point - that consolidation is also going on in the sphere of academia and universities. He pointed to the creation of the University of Amsterdam - merging two historically separate universities (religious and secular) into one - and then the creation of commercial research centers within the premises of the university, where students have to sign an NDA before stepping in. This was shocking news to me (well, not really - what did I expect?), but we bemoaned how the Netherlands - which historically had a liberal tradition, one that saved European culture from death in the aftermath of the printing press revolution - is no longer the defender of free culture it used to be.

We then talked about how - or whether it is even possible - to replicate the personal computer revolution in the sphere of AI and machine learning. What we desperately need today is a "Homebrew Computer Club" - the rag-tag band of losers, programmers and nerds which, in the 1970s, took on the grand monopolies of Xerox and IBM. I mentioned that this has to be a popular movement, not limited to an elite set of programmers and researchers, but inclusive of all sections of society: young kids at school, old people, and especially artists and creators of culture. We need to get those guys and girls who define the "quintessence of cool". AI and machine learning are something everybody should claim ownership of. But how?

A few weeks ago, I visited the Paris Machine Learning Meetup - hosted by the brilliant blogger Igor Carron and his co-conspirator Franck Bardol. I talked about virtual faces and Leonardo da Vinci - as random a mix-up of ideas as you can imagine. I had been thinking of going to this meetup for a long time, mostly because I liked its logo, which I show below.

This logo talks about giving power to the data, but it is really using the revolutionary image of "giving power to the people". This reminded me of the stories from the early days of computing, when Ted Nelson published his legendary book "Computer Lib / Dream Machines", which uses the same iconography.

This crazy book used to be the bible of the Homebrew Computer Club. I have never seen a copy, but finding and owning one is among my life's missions. This book kickstarted the idea of democratizing computer technology to the masses, long before anybody had seen or heard of a personal computer. What we need today is a seller of such dreams about machine learning for the masses. This means machine learning for your grandma, your dog, and your street artist. We cannot afford to bind machine learning in the prison of researchers, elite programmers and mathematicians. The only hope of saving our society from descending into a totalitarian state is to democratize AI. But how?

The meetups sprouting in all the major cities of the world show a path forward. They widen the audience beyond the researchers, engineers and managers of fat-walleted companies to something broader. But there is a long way to go before your grandma, your dog and your street artist get interested in machine learning. Alan Kay thought of computer programming as a medium, something as simple to use as a book. But we have not got there yet. Even bona fide computer programmers don't think of programming as a medium; they think of it as a skill to show off. In reality, programming should be as trivial as speaking a language: obviously requiring some training, but something that one can do without conscious effort. We have a long way to go until we get there.

Even though I grew up as a quintessential nerd and studied computer science in various universities, I never understood what computers were really about until I came across a lecture by Michel Serres in 2007. The French research institution INRIA, where I was working as a doctoral student, was celebrating its 40th anniversary and invited the philosopher Michel Serres to be the keynote speaker. His talk made a point so simple that it blew my mind: computers are not tools for solving problems, but tools for solving people, who in turn will solve the problems. In other words, all the fantastic applications of computers and the internet are just a side-show to something much bigger: like tiny ripples of water on a tsunami. Very few people understand the real impact of the computing revolution, because they need to imagine it from the perspective of a "changed brain" or a "new self", not from the present self. But there is a catch - before the computing revolution can catch on and make its true impact, it has to engage with the vast majority of society. It cannot be limited to just an elite few programmers and researchers.

It was not Michel Serres who first articulated this vision, but Marshall McLuhan, who greatly inspired early pioneers like Engelbart and Kay. An even earlier expression of this vision comes from Vannevar Bush, who wrote the essay "As We May Think" at the end of the Second World War. In physical terms, a computational way of thinking would rewire the human brain, expanding the higher cognitive functions, as well as those dealing with compassion and empathy. In other words, we would become a better species through the practice of computational thinking. I think these early dreams of the computing pioneers have largely failed. We stand today in a desolate moonscape of parched desert, where the vast majority of the human population lives in a prison of apps and trivial status updates - what Alan Kay once evoked by recalling Thoreau's remark on the transatlantic telegraph cable: that it would mainly help the vast majority of Americans know the latest fashion statement of a European princeling. Pessimism about human nature aside, I think there is a fundamental reason why the early dreams of the computing pioneers failed - the lack of useful applications that engage the user with the full potential of computing. I think this is now beginning to change with machine learning.

In the early days of personal computing, Alan Kay and colleagues had little children draw on computer screens and play music, and used this as a basis for establishing the principles of computer programming. As a child drew a picture on the equivalent of an "MS Paint"-like program, they romanticized that the child was "programming". By the way, "MS Paint" (and Apple Paint, or any of the other clones) is a trivialized corruption of the original demo at Xerox PARC, which did have educational value for teaching many aspects of computing. But despite their best efforts, the pioneers did not succeed in inculcating a knowledge (and love) of programming in the masses. Today, we may be in a much better position, because we can rely on large data sets, sensors and machine learning source-code to build far more engaging applications.

So the time is ripe for a "Homebrew AI Club". A new culture of computing can start today, in which every human can be an active participant, taking full advantage of the connecting power of the internet. But what is stopping it? I don't know. It may be time to take off our conspiratorial hats and believe in the full potential of the human species. It is time to get over our fears of the NSA, the big brother state and the lousy social networks, and see the bigger picture.

I don't know how and when this massive social change will happen. But after talking with Samim and Roelof, and generally brewing some thoughts in my puny head, I have a list of points to ponder on.

Ten commandments about AI (actually, just ten talking points) : 

1) The word "homebrew" is brilliant. It is reminiscent of home-brewed alcohol, which directly benefits the human user and immediately alters their mental state - which is exactly what we need to aim for.

2) We need artists. Anything as big as a social revolution will not happen because a bunch of nerds talk about mathematical equations. We need big-mojo people like Che Guevara or Steve Jobs (though less arrogance and ass-holish behavior would be nice). Heck, we need women. It is high time there were more women in computing. Most of all, we need to engage with people who have a creative spark.

3) All of us nerds need to start at home and explain technology to our families, girlfriends and boyfriends. The first thing to explain is probably data security and privacy. There is a monumental number of losers on the web who share their private data without even knowing it. We should first subtract our friends and families from this group.

4) But privacy awareness and computing knowledge are not the same thing. There will never be a magic switch that brings us to a privacy-respecting world. Living in the digital world will always be a battle against adversarial powers, which in the future will only become more powerful and obscure through the use of data and machine learning. It is highly important that everybody knows how to keep track of their own data and use it for the better. But how do we train n00bs in this?

5) An important obligation is helping elderly people. Most elderly people are already clueless with technology - fiddling with the inner lives of TV remotes and email preferences is not for them. But everybody has a right to lead a dignified life online. Before we snark and snigger at the troubles of elderly people with technology, let's imagine how much more awful our own lives will be when we grow old. Technology will screw us a million times over then (that is, if we are still alive and have not all disappeared in the smoke of a nuclear explosion).

6) The first step in doing machine learning is collecting data. What better place to start than collecting data about oneself: one's own circle of friends, one's shopping habits, one's tax bills, one's entertainment preferences, etc. We need open-source software that helps people collect this data and organize it in a nice manner.
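As a toy illustration of what such self-tracking could look like, here is a minimal sketch in Python (the log format and category names are entirely hypothetical) that organizes a small personal spending log by category:

```python
import csv
import io
from collections import defaultdict

# A hypothetical personal spending log, as one might export from a banking app.
RAW_LOG = """date,category,amount
2016-01-03,groceries,42.50
2016-01-05,books,18.00
2016-01-09,groceries,31.20
2016-01-12,transport,12.00
"""

def totals_by_category(raw):
    """Sum the amount spent per category across the whole log."""
    totals = defaultdict(float)
    for row in csv.DictReader(io.StringIO(raw)):
        totals[row["category"]] += float(row["amount"])
    return dict(totals)

print(totals_by_category(RAW_LOG))
```

Nothing here is machine learning yet - but having one's own data in a clean, queryable form is the prerequisite for everything that follows.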

7) The second step is to train people to use machine learning: simple regression functions first, then more complicated methods like deep learning. It is likely that not everybody will understand the maths behind them. But this is not as important as being able to use these methods in a regular and confident manner - similar to how one uses home appliances every day. I still remember the awful day when, as a fresh graduate student in the USA, I put a tin-foiled sandwich in the microwave and watched it explode in fireworks. To make it all worse, this happened in front of a bunch of school kids whom I was supposed to teach about robotics! In my entire life until then, as an engineering student in India, I had never used a microwave oven. If you had grilled me about what would happen if you put metal in a microwave, I would have pondered my physics knowledge and answered correctly that it would explode. But when I was just hungry and wanted to eat my sandwich, physics was the last thing on my mind. So I completely failed as a functional user of home appliances in the USA, due to my complete lack of training. With machine learning, I think we likewise need to train people to be functional users of the technology first. Obviously, a few will cross over into learning the maths behind it. But even if they don't, that is still quite okay.
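To make "simple regression functions" concrete, here is a minimal sketch (my own example, not from any particular curriculum) of fitting a straight line to a handful of points with ordinary least squares, in plain Python:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = slope * x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Closed-form solution: slope = covariance(x, y) / variance(x)
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Example: points lying exactly on the line y = 2x + 1
slope, intercept = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
print(slope, intercept)  # 2.0 1.0
```

One does not need to derive the closed-form solution to use it confidently - exactly the appliance-like fluency argued for above.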

8) The third step is to expand this training to a full programming language. I think most people get the teaching of programming languages wrong. Most humans don't care about the Turing-completeness of a language or a programming paradigm. They just want to get shit done. But knowing the basics of computability and information theory will be a must for anybody in the future. So we need to train them to think computationally - what the types of algorithms are, how we store data structures, how we evaluate computational cost, etc. We need ways to explain this simply to everybody. We can do this. After all, driving a car is not trivial. Putting a sandwich in a microwave is also not trivial (as I found out). But with a little bit of training, people do both these tasks quite well.
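As one small example of the kind of intuition worth teaching about computational cost, consider how the work of searching a sorted list grows with its size. This sketch (my own illustration) counts comparisons for a naive left-to-right scan versus binary search:

```python
def linear_search(items, target):
    """Scan left to right; return (found, number of comparisons)."""
    comparisons = 0
    for item in items:
        comparisons += 1
        if item == target:
            return True, comparisons
    return False, comparisons

def binary_search(items, target):
    """Repeatedly halve a sorted list; return (found, number of comparisons)."""
    lo, hi, comparisons = 0, len(items) - 1, 0
    while lo <= hi:
        mid = (lo + hi) // 2
        comparisons += 1
        if items[mid] == target:
            return True, comparisons
        if items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return False, comparisons

sorted_list = list(range(1000))
_, linear_cost = linear_search(sorted_list, 999)   # worst case: touches every element
_, binary_cost = binary_search(sorted_list, 999)   # roughly log2(1000) steps
print(linear_cost, binary_cost)  # 1000 10
```

The point is not the code itself but the habit of asking "how does the cost grow?" - a question anyone can learn to ask, with no mathematics beyond counting.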

9) We need to swallow our nerd pride. It is not easy, and we all have to work on it. When you are superior to your peers in some skill that is essential for survival, you can very often feel smug about your superiority. I don't know if it is humanly possible not to. But being a smug monkey is not the point. Evolving from monkeys to humans is our goal.

10) We need to enlist doctors and the medical profession for help. Of all the disciplines, I think clinicians have the most respectful view of human potential. Every day, they see broken people in their clinics, but they try to fix them and raise their potential. In a way, great teachers are also like clinicians: they treat each student separately and help them realize their individual potential. The medical profession is also very relevant because everybody is concerned about their own body and their medical choices. Often these choices are complex and require a fair amount of statistical and computational thinking. So we can develop computing paradigms that teach machine learning to people through their own personal data and medical choices.

I would like to finish this blog post by talking about narratives, and about the stories we need to tell about AI. I will use two existing pop-culture narratives - "Star Wars" and "The Lord of the Rings". I am a great fan of both.

The "Return of the Jedi" from the Star Wars movies gives a very nice narrative about how a "primitive" tribe of Ewoks overcame the much greater power of imperial storm troopers. Sure, there were a few Jedi warriors who were helping them, but the Ewoks were the fundamental game changer in the battle, which is one of the most lovely aspects of the movie. In our battle over AI, we need to get the Ewoks - people who value friendship and nature more than technological gadgets.

The "Lord of the Rings" books offer another similar narrative (actually George Lucas was quite inspired by these books).  The final battle with the dark lord Sauron is won through an alliance of elves, men, dwarves, as well as hobbits and ents. I particularly like how central the hobbits and the ents are to these battles, as these are tribes that are not technologically superior, but value friendships and nature immensely. Another nice reference is to the wizards (loosely analogous to the AI researchers), who are split between the forces of the light and the dark.

Well, these are just two narratives, not quite complete, but much better than the stupid Skynet narrative about AI. The danger of the Skynet narrative is that it is fatalistic - as if humanity were a deer caught in the headlights, unable to do anything about AI.

Robots are going to get your job.
Robots are going to make out with your boyfriend.
Robots are going to eat your babies.
Robots are going to wipe you clean.
No, Robots are going to make your breakfast.
No, Robots are going to make you immortal. 
These binary narratives about AI need to stop! Humans are the agents of their own lives!

We also don't need help from a Jesus-like savior, as in the Matrix movies (yes, hello, the title of my blog). We need stories that help us believe in everyone.

Sunday, January 17, 2016

Anatomy of open-source education: A paraview of "Adventures in human being" by Gavin Francis

An anatomical drawing from "De Humani Corporis Fabrica" by Andreas Vesalius, published in 1543

Traveling through time in Florence : 

A thin fog hangs low on the Arno river in Florence, framing the picturesque bridge of Ponte Vecchio in a nice photographic effect. Inhaling the chill of the early winter air, I walk past the rows of shops selling jewelry and trinkets on the bridge. As the evening sets in, the throngs of tourists are gone, along with the artists, street food vendors and peddlers of souvenirs chasing them. There is a faint smell of wood smoke, perhaps coming from a pizza being baked somewhere.

As I absorb this moment of tranquility, my mind is occupied by a man who lived here five and a half centuries ago - Leonardo Da Vinci. I am reading "Adventures in Human Being", a book by the young Scottish doctor and clinician Gavin Francis, and Leonardo is a central character in it. Partially autobiographical, the book narrates the author's experiences in his medical practice, exploring the idiosyncrasies of the various organs of the human body and how they were discovered in the history of medicine. Leonardo Da Vinci - the great artist and engineer, maker of the Mona Lisa and the flying parachute - also helped humankind discover the most fundamental of objects: their own faces. He was the first person in history to study facial expressions in a scientific manner - how the muscles stretch and deform the skin, and how human emotions are translated into smiles and frowns on the face. As I pass the tourists and Florentine citizens, I wonder how Leonardo would have looked at their faces. Would he have read the emotional history of their lives in one glimpse? Would he have seen, in X-ray vision, the anatomy of muscles beneath the skin? As I wonder, I begin to suspect that Leonardo is the answer to one of the greatest questions of history: how did our civilization enter the modern age of scientific investigation? And why did this happen first in Florence, Italy?

Once the cultural capital of the world, Florence now resembles a museum turned inside out - a dissected corpse of the European renaissance. It is as if the vibrations of a furious energy from centuries past spilled masterpieces of art onto the pavements and gateways, with museums and galleries unable to hold them behind closed doors.

Museo Galileo is an unassuming building behind the famous Uffizi gallery, dedicated to telling the story of how modern science began. I visited this museum a few years ago and saw its fantastic historical relics. Now I play a game in my mind, trying to recollect where behind the walls of this building they lie: the first telescopes, microscopes, astronomical charts, sundials, mechanical clocks, navigational compasses, and the first instruments for measuring electricity. Walking through those exhibits was like traveling through time, with each successive era bringing greater standards of precision in measurement. Apart from the myriad scientific instruments, two sets of objects piqued my interest then. The first was a set of demonstrators illustrating specific scientific principles - for example, the parabolic path of a projectile discovered by Galileo. The second was a set of anatomical models showing advances in medical science - for example, tools for delivering babies in complicated pregnancies. At first sight, these anatomical models with their messy biology stood awkwardly apart from the clean instruments of physics. But there is a link between the two, though this fine museum fails to show it within its modest premises. That missing link now lies in the British Library in London, in the notebooks of Leonardo Da Vinci. They demonstrate that anatomy was the first modern science, and that physics followed afterwards.

The great physicist Ernest Rutherford said
All science is either physics or stamp collecting. 
This disdain runs deep in our academic establishment, with physics at the top of the pecking order of departments. Many fields try to ape physics in their methodology, aiming for overarching theories that condense the "truth" into a small set of equations. But are we doing it completely wrong?

In this blog post, I will narrate a short history of the science of anatomy - the original harbinger of precision in modern science. I will argue that we scientists should aim to mimic anatomy instead of physics. By that I mean we need to pay attention to detail even without any pretence of an overarching theory. This is not mere stamp collecting, but an art of map-making that is necessary to understand a complex and dynamic subject, such as the human body. I believe this is particularly important for my own field of study - computer science, which is arguably leading us into a new scientific age. I once related on this blog a lecture by Michel Serres, in which he used the word bouleverser to describe what computers are doing to our society - destroying everything and remaking it in a brand new fashion: politics, culture, education, economy, science, everything. This radical transformation is comparable to only two preceding moments in human civilization: the inventions of writing and printing. As we step into such a new scientific age, we should understand how this happened before, 500 years ago in Florence.

Awakening of the science of anatomy: 

(Left) An anatomical drawing based on "Anathomia Corporis Humani" by Mondino de Liuzzi, written in 1316. The drawing is from a later print of 1541. Note the lack of detail compared with Vesalius, above. (Right) The cover of the book, showing the practice of dissection for anatomical studies.

Browsing through Wikipedia, I discovered a peculiar quirk of history that connects three great scientists. The anatomist Andreas Vesalius was born in 1514 and died in 1564. The astronomer Galileo Galilei was born in 1564 and died in 1642. The physicist Isaac Newton was born in 1642 and died in 1726. Tibetans who believe in reincarnation might very well say that the ghost of the scientific spirit successively reincarnated in these three people. The last of the three, Newton, was born on the 25th of December 1642, which gives rise to a recurring annual joke amongst atheists at Christmas time: "Of course, we need to celebrate the birth of Newton". Newton revolutionized physics by bringing it into the ambit of mathematics: his glorious three laws of motion. Many school children grow up thinking that this mathematical insight occurred to Newton while he was dozing under an apple tree when lo, an apple popped on his head. Of course, this story is a myth - but so is the notion that physics was the origin of modern science. To understand what motivated Newton, we need to understand Galileo and his fellow pioneers in precise astronomical observation, such as Johannes Kepler. And to understand what motivated Galileo, I believe we need to understand Vesalius and his fellow pioneers in precise anatomical observation.

By the time of Vesalius, the science of human anatomy was already mature. Crucially, it was Europe that held the advantage in anatomical knowledge, as opposed to other great civilizations such as China, India or Arabia - civilizations that stood superior to Europe in many other fields, such as mathematics or astronomy. Scholars of history keep arguing about why the awakening of modern empirical science, requiring precise experimental observation, arose in Europe and not elsewhere. There are many theories, but I believe the answer lies with anatomy, where Europe, and specifically Italy, held a distinct superiority over other places. Dissecting the human body was a cultural taboo in many civilizations, as it still is today in many contexts. Ancient India had vast knowledge of medicine, but cutting the human body was banned for religious reasons. Instead, Indian anatomists let the body decompose naturally and peeled off the layers using Kusa grass. This was not as effective as the precise surgical instruments for studying anatomy practised later in Europe. Ancient China had a cultural emphasis on holistic medicine that considered the body as a whole, as opposed to observing each organ in detail. This resulted in a lack of demand for anatomical knowledge. It is not clear whether Arabs performed dissections, but the strong prohibition in Islam against representing the human body in artistic form resulted in a shortfall of anatomical knowledge.

A strong contrast to all these civilizations is the popularity in Europe of the artistic depiction of the human form. Sculpture and the fine arts were valued highly by the Greeks and pursued with equal gusto by the Romans. It can be argued that Greek science and philosophy stagnated during Roman times, but the arts of sculpture, painting and architecture advanced immensely. As the artistic standards for depicting the human form rose, many painters and sculptors realized that they needed to understand the underlying muscular structure of the human body. However, the Roman empire prohibited the dissection of the human body, again for religious reasons. The Greek physician Galen, working in Rome in the 2nd century AD, obtained his anatomical knowledge by observing injured gladiators, and where this did not suffice, he compensated by dissecting animal cadavers. For many centuries, his work remained the standard in Europe, as further progress was stunted by the ban on dissection. Remarkably, the delivery of human babies was studied by dissecting sheep - so much so that amnion, the scientific word for the membrane surrounding the fetus, comes from the Greek word for lamb. Obviously, this extrapolation from animal studies to human bodies resulted in several errors, which were not corrected until the 16th century by Vesalius.

Before Vesalius could achieve his mastery of human anatomy, the gates had to be opened for scientific investigation of the human body through dissection. The first opening was provided by Christianity: although it required the proper burial of the bodies of believers, Christianity removed the sharp prohibition on dissection present in pagan Rome. The atmosphere had sufficiently relaxed by the 13th century AD, when an Italian physician named Mondino de Liuzzi was able to perform public dissections of human cadavers. He wrote a text named "Anathomia Corporis Humani", considered to be the first true anatomical text based on observations of the human body.

Later physicians in Italy regularly performed dissections and improved anatomical knowledge. The text of Vesalius, with its detailed anatomical drawings, is thus the outcome of a gradual evolution. Its publication was greatly aided by the invention of the printing press by Johannes Gutenberg (born 1398, died 1468), but the science of anatomy was already in the ascendancy by this time. Apart from physicians, sculptors and artists in Italy studied anatomy in a rigorous manner. The teacher of Leonardo Da Vinci (born 1452, died 1519) in Florence, Andrea del Verrocchio (born 1435, died 1488), was a master of human facial anatomy, as can be seen in his drawing below.

Drawing of St Jerome by Andrea del Verrocchio, the teacher of Leonardo Da Vinci

Another great master of human anatomy was the sculptor Michelangelo (born 1475, died 1564). This analysis of human anatomy flourished in Florence and spread from there to the whole of Italy and Europe. Leonardo Da Vinci, in particular, depicted bone structure and musculature in significant detail, as shown below.

Anatomical drawings of Leonardo Da Vinci from his notebooks

Apparently, Leonardo was so curious to study facial expressions that whenever he saw particularly ugly or grotesque looking people, he followed them around the town in the hope of catching their facial expressions under different emotions. His systematic analysis of the human face is the secret behind his masterpieces - such as the Last Supper and Mona Lisa, with their precise and enigmatic facial expressions.

It is sometimes said that the scientific revolution began in the year 1543, with the publication of De Humani Corporis Fabrica by Vesalius. Another significant event of the same year was the publication of Nicolaus Copernicus's heliocentric theory of the universe (an idea already proposed by several ancient astronomers). However, we have to note that the work of Vesalius is not the beginning, but a significant milestone of maturity for the science of anatomy. In terms of scientific investigation and experimentation, the other sciences caught up only much later.

The scientific study of anatomy revolutionized surgery, and medicine in general. In his book "Adventures in Human Being", Gavin Francis describes the Victorian medical school building in central Edinburgh in Scotland as follows.

Carved into the stone lintel of the entrance was "SURGERY ANATOMY PRACTICE OF PHYSIC"  The greater weighting given to the word ANATOMY was a declaration that the study of the body's structure was of primary importance, and the other skills we were engaged in learning - those of surgery and practice of physic (medicine) were secondary. 

Edinburgh has a unique and significant place in the history of anatomy. The physician and artist Charles Bell was born and practised here. Francis narrates how Bell was inspired by the drawings of Da Vinci and by their analytical detail in the depiction of facial musculature. Bell described in detail the different facial nerves that animate human facial expression. One of the medical pathologies he studied was facial paralysis, now known as Bell's palsy. The complexity of facial anatomy in humans led Bell to proclaim that humans are unique among animals in the complexity of their facial expressions. This model was later criticized by Darwin, who also studied in Edinburgh. But Darwin expressed great admiration for the anatomical work done by Bell and by his predecessor Da Vinci, which undoubtedly influenced the later development of his theory of evolution. Bell's observations on the uniqueness of human facial expression remain relevant to this day, as facial expression mirrors the complexity of emotional states in the human brain, which substantially exceeds that of other animals. In more recent times, psychological research by Paul Ekman and others has shown that facial expressions in different human cultures, including traditional tribal societies, are remarkably similar to each other and exhibit similar complexity. The synthesis of virtual human characters in films and computer games derives greatly from this work in psychology and anatomy.

Another significant advance in the study of anatomy is the work of Henry Gray, whose classic "Gray's Anatomy: Descriptive and Surgical", first published in 1858, remains an important reference to this day. Many scientists working in other fields have relied on this book to extend their findings to medicine. This open scientific investigation, structured on an open understanding of the human body, has greatly benefited mankind. It is arguably one of the most important gifts of science, alongside evidence-based medicine through randomized controlled trials and the discovery of antibiotics such as penicillin. These advances in medicine have saved countless lives and alleviated the pain of many more people. In this sense, even from a practical and utilitarian point of view, medicine (and specifically anatomy) is a greater role model for the sciences than physics.

Open source education of anatomy: 

With the advances in body imaging technologies and computational medical sensors, we now have a unique opportunity to extend the understanding of anatomy to the general public. Unlike X-rays and CAT scans, novel imaging technologies such as ultrasound imaging and magnetic resonance imaging (MRI and fMRI) do not use harmful ionizing radiation. Thus, in principle, they can be used extensively to acquire far more data about human bodies than we have today. But in practice, they remain extremely expensive and have not yet realized economies of scale. Very few people have deep knowledge of their own bodies, as captured by the latest medical imaging technology. Why is this?

Very often, by the time doctors diagnose harmful tumours, bone fractures or misalignments in the body, it is already too late. This results in costly late procedures or even in the death of patients. This is terrible, but there is a greater problem that is not apparent to people who are unaware of the possibilities of computational medicine.

It is now possible to capture and analyse the human body throughout its metabolism, and obtain a detailed understanding of its processes of digestion, locomotion or cognitive control. Obviously, all these bodily processes depend on the exact physical structure and anatomy of the patient, as well as their lifestyle. So medical advice can be tailor-made to the exact needs and requirements of the patient. This can be achieved by computational analysis of the sensor readings from one patient, putting them in relation to statistics from large human populations. All of this computational analysis needs to be grounded in the anatomy of the human body. Another grounding factor is the DNA of the patient. If we have an open model to represent this knowledge, it can facilitate the development of personalized drugs as well as lifestyle recommendations, which can be prescribed by the doctor in coordination with experts in many different fields. These recommendations can be as detailed as the posture of the body, the cognitive tasks during the day, or how to organize one's home or furniture. This will be in stark contrast to the one-size-fits-all drugs that we currently have in the pharmaceutical market. In fact, future generations will look back at our current medical practice in dismay, similar to how we look at the ancient Roman physicians who delivered human babies on the basis of sheep anatomy.
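As a toy illustration of this idea of relating one patient's readings to population statistics, consider the sketch below. The numbers are entirely invented for illustration, and this is in no way a clinical tool: it simply reports where a single (hypothetical) resting heart rate sits within a (hypothetical) reference population, as a percentile and a z-score.

```python
import statistics

def percentile_of(value, population):
    """Fraction of the population with a reading strictly below `value`."""
    below = sum(1 for x in population if x < value)
    return below / len(population)

# Hypothetical resting heart rates (bpm) from a reference population
population_hr = [58, 62, 65, 67, 70, 72, 74, 76, 80, 85]
patient_hr = 74  # one patient's (invented) sensor reading

mean = statistics.mean(population_hr)
stdev = statistics.stdev(population_hr)  # sample standard deviation
z_score = (patient_hr - mean) / stdev

print(f"Patient percentile: {percentile_of(patient_hr, population_hr):.0%}")
print(f"Z-score relative to population: {z_score:+.2f}")
```

A real system would of course replace these toy lists with large imaging and sensor data sets, grounded in the anatomical and genomic factors discussed above, but the basic move (situating one body within the statistics of many) is the same.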

However, medical diagnosis is not solely a computational problem and we can never replace the role of the clinician. Reading Gavin Francis's book made me realize this very strongly. There is an underlying strand of compassion and empathy that runs throughout the book, across the various medical cases that he describes. This empathy can only be obtained from dedicated clinical practice. It is the job of the clinician to explain the alternatives to the patient and convey the information in a manner that they can understand. Very often the problems are not clear-cut and there is no easy solution. In this sense, the role of the clinician is not much different from that of a teacher. What we now have is a problem of educating the general public, and making them understand their own bodies in a more profound manner.

Despite the great advances in technology, our societal understanding of the human body has not progressed much from the era of Leonardo Da Vinci. We need a new model for anatomy that corresponds to the computational understanding of the human body. We need a method of communicating this model between experts and the general public, in a way similar to how printed books communicated the anatomical drawings of Vesalius. Without these models of communication, we will not be able to exploit the advances of medical imaging technologies and computational medicine. In fact, the situation is far worse. These advances will be used by nefarious powers in the market, who will exploit the gullible public similar to how livestock animals are exploited. The disaster in the food industry, which keeps churning out addictive sugar-loaded products, is a case in point. Another disaster is unfolding in the industry of medical supplements and pain medicine, which get their users addicted. Many people use legal drugs such as nicotine and alcohol as pain relief. But the human exploitation is not limited to food or drugs. All social and cognitive activities will be analysed with respect to how they affect the human body, and thus they will be exploited. An important avenue of exploitation, based on advances in neuroanatomy and physiology, is that of our digital lives on the internet, which now hosts a significant chunk of the economy.

How can we have an open-source education of computational anatomy for the general public? This is not an easy question to answer. In fact, the question is deeply tied to how we communicate computational objects in general. Most people do not understand computers. Even trained computer programmers do not understand complex software. This is the case even for "open-source" software, where the source code is made public for anyone to see. For most people, it is illegible, a foreign and alien language. If we are not able to communicate relatively simple computational objects like web pages, how can we communicate complex biological models to the public?

We need to go back to the roots of anatomy and decipher how those artists sketched the human form and musculature. In fact, an anatomical drawing follows a complex aesthetic of visualization. When the body is dissected, the important nerves and muscles are drawn out and presented in a visually comprehensible manner. This is necessary even for medical students, and more so for patients. If we want to show the metabolism of the human body in motion, we need to develop a similar aesthetic language. Crucially, this language should be able to represent the computational aspects of the metabolism that are relevant for diagnosis. This might seem like a daunting task, but we have a great starting point: people are motivated to know about their own bodies, and they can start from static anatomical drawings aligned to their medical images. However, we cannot avoid the job of educating the public. In this regard, we need the effort of artists similar to Leonardo Da Vinci, who can bridge the elite world of scientists and the lay person.

We still have many cultural taboos about anatomical images. Most people associate bone skeletons and inner body images with corpses. Very few people see them as masterful creations of nature, depicting all the dynamics of life. In Germany, there is a traveling exhibition of plastinated human and animal anatomy, known as Körperwelten (Body Worlds), which keeps running afoul of religious groups. Part of the appeal of such anatomy exhibitions is their shock factor and morbid element. However, we need to grow out of this shock and understand our human bodies in their true dynamic form. We need artists who can bridge this gap.

Anatomy of open-source education: 

The media theorist Marshall McLuhan theorized in the 1960s that modern media are fundamentally reshaping human consciousness. He profoundly influenced many computer scientists who wanted to achieve a fundamental transformation in human society through computational thinking. In order to qualify as a medium for thinking, a computer should not be restricted to an elite set of programmers or engineers, but used by everybody. In other words, it should be as versatile as a book. This vision and dream was the driving force of the pioneering work in personal computing by scientists such as Seymour Papert and Alan Kay. We now have the technological means to provide personal computers for all human beings. However, the way we are using computers today is a complete disaster, and a total sell-out of the original dream.

Most people use their computers (now hidden inside their mobile phones or home appliances) like livestock tethered to a pole. In other words, they are willing slaves to an overarching system of control. With repetitive use, they are conditioned to think and behave like captive animals. In this regard, computers and the internet are much worse than books, and closer to chemically addictive substances. A loose analogy to the situation today would be if most people used books entirely to write confession material for the church pastor, and never for reading anything, not even the Bible. If books were used like that, they would have produced a slave society in complete thrall to the powers in the dark. The invention of the printing press by Gutenberg would have simply exacerbated the situation. In reality, this did not happen. The fundamental reason is that elite scientists found a way to communicate their knowledge to the general public. This started with the science of anatomy. Today, we face a similar task in the computational sciences. I believe we can draw similar inspiration from anatomy, building the first applications in health and environmental sciences, where there exists a significant minority of passionate people eager to know more. These people can be trained to think in a computational manner through applications that visualize the hidden anatomy of human bodily processes or of environmental processes. Much as the underlying neural and cardiovascular structure describes the health of the human body, the threads of biodiversity and natural resources describe the health of the environment.

These applications need a method for communicating computational media that can be easily interpreted and visualized by the general public, and which, over the course of interaction, reveals its computational intricacies to the user. In other words, we need to decouple how the media is programmed from how it reacts to the user. But at the same time, we need a guarantee that the user interface provides the same level of computational control as a full-fledged programming language. The work of Alan Kay, the original inspiration behind the movie Tron, remains a hallmark in this regard. Snippets from this inspiring work are now available in the programming languages Smalltalk and Squeak. Another example is the work of Bill Atkinson at Apple, who developed HyperCard (eulogized here by Douglas Adams). Although limited in its computational expressiveness, this pioneering work was far ahead of the World Wide Web developed by Tim Berners-Lee in terms of user expressibility. The web, in turn, is far more expressive than the disastrous world of apps we have today. In terms of computational education of the general public, we have regressed tremendously.

In today's world (in 2016), my personal hopes for an open data format for educating the general public to think in a computational manner rest with iPython notebooks. At present, these notebooks are used extensively by researchers in computational sciences and machine learning to display scientific data sets. With a little bit of prodding, they might be developed to communicate computational data to the general public. Taking another cue from clinical medicine, we can develop applications to help users understand their own bodies or lifestyles in an analytic manner. It is very easy to develop apps that make money by getting users addicted via some psychological weakness. It is much harder to develop apps that don't make as much money, but will educate the public. Without conscious effort from programmers, this will never happen.
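The kind of notebook cell I have in mind might look like the following sketch: a few visible, editable lines of Python that smooth a user's daily step counts into a weekly trend. All the numbers here are invented for illustration; the point is that the computation itself is laid open to the reader rather than hidden behind an app.

```python
def moving_average(values, window=7):
    """Smooth a daily series so that weekly trends become visible."""
    return [
        sum(values[i - window + 1 : i + 1]) / window
        for i in range(window - 1, len(values))
    ]

# Two weeks of (invented) daily step counts
daily_steps = [4200, 5100, 3900, 8000, 7600, 2500, 6100,
               6800, 7200, 5900, 8400, 9100, 4800, 7000]

weekly_trend = moving_average(daily_steps)
print([round(x) for x in weekly_trend])
```

In a notebook, the reader can change the window, swap in their own data, and immediately see the result, which is exactly the kind of gradual revelation of computational intricacy argued for above.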

What is a paraview : 

This blog post is inspired by my reading of the book by Gavin Francis. However, it also draws from my own personal experiences and my perspective as a computer scientist. In this sense, it is something more than a review of the book. I am expressing my inspiration by using the equivalent of anatomy in human language, etymology, to coin a new word.

In Greek, the prefix "para" refers to something "beyond" or "by the side of". Interestingly, this is one of the prefixes that applies equally to Greek and Sanskrit. Examples in English that use this prefix include paragraph, parallel, parapsychology, paranormal and parachute (first imagined by Da Vinci). I coin the word "paraview" to refer to a "view beyond the subject". I think this is a very useful word to describe a detailed comment enriched with one's own experience or perspective. Ideally, our expressions of our digital lives on the internet should be connected to one another as paraviews, and not organized in a hierarchical list. This imposition of hierarchy on human thought by the world wide web is one of the complaints of Ted Nelson, the originator of the ideas of hypertext and hyperviews. At present, we do not yet have a hyperview browser to inspect the anatomy of online content, as envisioned by Ted Nelson. But at the least, we can try to express paraviews, instead of mere appendages in a hierarchy.

Wednesday, November 25, 2015

The four sided negation: A tool for overcoming bias in science, atheism and democracy

Towards the end of the 12th century, the Islamist raider Bakhtiyar Khilji ransacked the ancient Buddhist university of Nalanda in eastern India. Apart from the many students and monks who lived there, the monastery also housed a large library, similar in extent to the great library of Alexandria in Egypt. After he looted the wealth of the monastery and put many monks to death, Bakhtiyar Khilji was asked to decide what to do with the books in the library.

Khilji first asked if there was a copy of the Quran in the library. The answer was negative. He then asked for all the books in the library to be burned. "But valuable knowledge in the books will be lost forever", warned the caretakers of the library. Khilji famously answered as follows:

"If there is a book that says anything against the teachings of the Quran, then it is blasphemous and should be burned. If there is a book that agrees with the teachings of the Quran, then it is superfluous and should also be burned. Please take care that no book is left unburned."

The Persian historian Minhaj-i-Siraj, who was accompanying the raid, recounted that the fires from the embers of the library kept blazing for several days. This action might horrify us today on account of its senselessness and violence, but there is a twisted logic to what Khilji said in defense of burning the books. An echo of this logic can be found in the pronouncement of George W. Bush after the 9/11 attacks on New York City in 2001.

You are either with us or you are against us! 

Bush did not see any shades of gray in the vast canvas of the political landscape of the world. Similarly, Khilji classified knowledge in a strictly binary fashion: that which is in agreement with the Quran and that which is not. He considered the Quran itself all-encompassing in knowledge, so this strict binary classification precipitated his decision. But what was lost in the many books burnt at Nalanda was not only a vast compendium of facts and knowledge, but also a philosophical commentary on the limitations of binary classification of knowledge. Indeed, the most profound examination of these limitations was done by Nagarjuna, a Buddhist scholar who, in the third century AD, many centuries before the raid, served as a rector of Nalanda university. But what Nagarjuna elaborated on was an even more ancient tradition from India, tracing its roots at least three thousand years deeper into history, when it was codified in the Vedas - the holy scriptures of the Hindu religion. According to this tradition, logic is not 2-valued (as commonly understood as true/false), not even 3-valued (as commonly understood to denote shades of gray), but 4-valued. This 4-valued logic is known in Sanskrit as Catuskoti. For a long period of human history, this logic reigned supreme in Asia. Eminent philosophers and logicians in China, Japan and southeast Asia painstakingly translated the Buddhist scriptures into their native languages, the most cherished ones being the commentaries written by Nagarjuna.

I think it is sad that this ancient logic is lost in practice today. Due to its association with religious and spiritual discourse, this 4-valued logic is sometimes considered unscientific, but nothing can be farther from the truth. In fact, the most important applications of this logic have been in objective fields. In this blog, I will argue that considering 4 sides to a negation, instead of the usual 2 sides, is a necessary requirement for both scientific study and political discourse. Essentially, negation is the most elementary operation in language. We cannot think without applying negation to different concepts and ideas. But every time we apply negation, we introduce a strong bias into our thinking, which will eventually lead us to severe errors and paradoxes. We will examine this further, while taking some interesting detours in history.

Introduction to Catuskoti or the 4-sided negation: 

Figure 1: Venn diagram illustrating binary logic 

Let's consider a simple proposition "Cheetah is the fastest animal" and denote it by the symbol A. In regular logic, this proposition is either true or false. When we think of a proposition, we usually visualize it in the mind's eye on a straight line segment: akin to a barometer of falsehood. On one end lies truth and on the other end lies falsehood. When we are certain of the truth value of a proposition, this line segment dissolves into two tiny dots, standing apart from each other, which we can call "A" and "Not A". However, this visualization is incorrect.

Instead, what we need to visualize is the Venn diagram shown in Fig 1. When we frame a proposition, we create an island "A" in a vast sea of possibilities, known as the universe Ω. When we use strict binary logic, we associate "Not A" with everything in Ω except the island denoted by A. Depending on how many words we use to describe A and how clearly we describe it, our knowledge of this island increases. But the vast sea surrounding this island can still be a mystery to us. For example, to elaborate on our example, a Cheetah can be described as a wild cat with a yellow fur coat and spots, native to Africa, that hunts gazelles. To go to the extreme, we may describe it by giving the entire genome of the Cheetah in the code of DNA. But what exactly is "Not A"? What differences to the genome of a Cheetah would classify it as a non-Cheetah?

Figure 2: Venn diagram illustrating fuzzy or probabilistic logic 

As our knowledge about other animals increases, we can measure our understanding by creating an island of ambiguity "A and Not A" that encroaches into the island of "A" in the Venn diagram. This island measures the ambiguity of the proposition, inherent to the finite length of the description of A. As we conduct experiments into how fast Cheetahs run as opposed to other animals, we can measure the probability of A being true. Ideally, this island of ambiguity shrinks and we get back to our binary world.

But in reality, we still have no complete knowledge of the universe Ω. What exactly is an animal? How many other animals do we know? What exactly do we mean by "fast"? Can bacteria floating in rapid ocean currents be considered animals? This is a case we did not consider when we framed the proposition A. Unlike the island of ambiguity, this uncertainty is not related to A but to the universe Ω. Specifically, it refers to the limitations of our understanding when we say "Not A": its overlap with Ω is not as complete as we imagine it to be. If we correct for this, we obtain a visualization of the 4-valued negation as follows.

Figure 3: Venn diagram illustrating Catuskoti, or the 4-sided negation. 

This 4-sided negation creates four distinct zones in the universe Ω. With increasing knowledge, our understanding of all these 4 zones increases, but the zone of "Neither A nor Not A" may remain forever. This is because we may predict that we don't know something about the universe, but we may not be able to express exactly what it is that we don't know. Please note that this limitation is due to two things: (1) our understanding of the world and (2) the limited expressibility of language. Even if we allow humans to be omniscient (in whatever finite context for Ω is specified by our problem), the ambiguity in the language may never be resolved. This is a mathematical fact as well as a philosophical realization that was achieved only recently in Western philosophy. We can refer to the works of Bertrand Russell and Ludwig Wittgenstein, as well as the mathematician Kurt Gödel. In contrast to the west, these limitations were studied from the very beginning in Indian philosophy and logic.

In the traditional Indian philosophical system of Vaiseshika, the universe is supposed to consist of many tiny atoms. In the Venn diagram above, this translates into Ω being composed of many (possibly infinite) tiny discrete samples. When we use negation, it is like cutting the above Venn diagram with a knife. Our knife may not be sharp enough to cut precisely between the different atoms. Thus, the 4-sided negation is a fundamental tool in logic that acknowledges this limitation.

Unlike the island of ambiguity given by "A and Not A", which is addressed by fuzzy or probabilistic logic, the 4 zones of Catuskoti cannot be reduced to a single line segment between 0 and 1. Indeed, such a reduction is topologically impossible. What we need is a different mathematical construction. I am not aware of anything that fits the bill.
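To make the four zones concrete, here is a minimal sketch in Python of a four-valued truth type. The negation chosen here swaps the two classical values while leaving the two non-classical zones fixed; this matches the negation of Belnap's four-valued logic, which shares the Catuskoti's structure, though the names and the modelling are my own choices for illustration.

```python
from enum import Enum

class Koti(Enum):
    TRUE = "A"                       # the island A
    FALSE = "not A"                  # the describable sea around it
    BOTH = "A and not A"             # the island of ambiguity
    NEITHER = "neither A nor not A"  # what we cannot yet express

def negate(v: Koti) -> Koti:
    """Negation swaps the two classical values but leaves the other
    two zones fixed: ambiguity stays ambiguous, and what lies beyond
    our powers of expression stays beyond them."""
    table = {
        Koti.TRUE: Koti.FALSE,
        Koti.FALSE: Koti.TRUE,
        Koti.BOTH: Koti.BOTH,
        Koti.NEITHER: Koti.NEITHER,
    }
    return table[v]

# Double negation returns every value to itself; in particular, BOTH and
# NEITHER never collapse into plain truth or falsehood.
for v in Koti:
    assert negate(negate(v)) is v
```

The point of the sketch is exactly the topological observation above: no order-preserving squeeze of these four values onto a single [0, 1] segment can keep BOTH and NEITHER distinct from the classical pair.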

I find it useful to employ an additional tool along with the traditional line segment between 0 and 1, with which we measure probabilities. I suggest using a circle, onto which the entire universe Ω is projected. This circle can be visualized as roughly centered on the island "A". At each point on the circle, we can measure our uncertainty in predicting the world in that direction. Unlike the probabilities, this uncertainty (ignorance) can be unlimited, but it can still be characterized in some way. For example, we can say that our ignorance is greater in certain directions than in others. The metaphor of a circle was also used in defining what a "well-rounded" education meant. Until a few decades ago, the education of a person was not considered complete until they achieved certain basic knowledge in various diverse fields. Without this fundamental knowledge in all directions, it was considered that any decisions or conclusions made by a person would be faulty. However, with increasing knowledge and diversification of scientific fields, our society has sadly reduced the emphasis on well-rounded education. Sad as this is, I think it is a symptom of a deeper problem in logic, where the 4-sided negation was forgotten. So where and how was this innovation in logic forgotten?

The story of Ramon Llull and how European renaissance missed the 4-sided negation: 

Ramon Llull was a Catalan philosopher in the 13th century who was critically important for the historical development of the European renaissance. He traveled extensively and spent a significant amount of time in North Africa, conversing with Arabs in the Moorish centres of learning. During his time (roughly termed the middle ages), Europe had sunk into the depths of ignorance, with the classical discoveries of Greece and Rome forgotten. In contrast, the Arab civilization was in full flourish, where Arab mathematicians and astronomers had compiled and built upon knowledge from lands ranging from Greece to India and China. It was the contact with Arabs, in southern Spain and Italy, that sparked the fire of European scholarship. Ramon Llull was a critical piece of this puzzle.

Arab astronomers were interested not only in mapping the course of the stars and the planets, but also in understanding how this connects to human psychology. These esoteric sciences of astrology, alchemy and parapsychology are shunned today as superstitions, but in earlier periods, all scientific investigation was done at the behest of such esoteric subjects. The fundamental goal of these scientists was to liberate the human condition from suffering and achieve harmony with the universe; the objective study of natural phenomena was only a means to this end. The harbinger of modern science, Isaac Newton, was himself a practitioner of alchemy. With Ramon Llull, these lines were blurred even further. The Arab astrologers were building upon the ancient science of Indian astronomy and astrology (known as jyothisha). In ancient Indian mythology, the circle denoted the universe and the square denoted the human understanding of this universe. The Vedic sacrifice required mathematical understanding of how to convert a circle into a square. These esoteric ideas spurred a lot of mathematical development - arithmetic, algebra, trigonometric functions, infinite series and so on. Many of these ideas reached Europe via an Arab translation of an Indian text called Brahmasphutasiddhanta (translated into Arabic as Zij al-Sindhind by the astronomer Muhammad al-Fazari). The Arab astrologers codified these ideas into a mechanical system that makes the right predictions irrespective of who uses it. In one form, this mechanical system became a device known as a Zairja - the first human imagination of an automaton.

Ramon Llull was deeply inspired by the Zairja and created an esoteric art of symbolism that he termed Ars Magna. This was supposed to represent all the aspects of the universe, as well as all the aspects of God. By representing all the possible aspects of knowledge in a cartographic manner, he created the first representation of "well-rounded" knowledge. He wanted to develop a mathematical automaton for debating with the Muslims, in order to win them over and convert them to Christianity. The legend goes that he was pelted with stones in Tunis when he announced that his Ars Magna could not be understood unless the person accepted Christ as the savior. After his death, Llull was beatified by the church, but his most sincere disciple was burnt at the stake. This was Giordano Bruno, who developed upon Ars Magna to denote all the cosmological concepts and who had come to the conclusion that the sun was at the centre of the universe (and not the earth, as mentioned in the bible). Galileo escaped a similar fate from the church by recanting his claims, although he could see the motion of the planets in his telescope. This uneasy relationship with astrological and esoteric sciences persisted in all Abrahamic faiths, with practitioners often being accused of worshipping the devil. This is inevitable, because these sciences stemmed from pagan cultures like Greece and India.

The influence of Llull extended beyond astronomy. Leibniz wrote his dissertation on Llull's art, and this greatly influenced his future pursuits: calculus and logicism in philosophy. He called this Ars Combinatoria. Along with Newton, Leibniz was the fundamental mover of the European renaissance in mathematics and physics. Many mathematicians tried to follow in Leibniz's footsteps, to ultimately realize a consistent and mechanical system that can predict everything. The latest expression of this dream was by Hilbert, who tried to reduce all mathematics to a set of axioms. However, this adventure was cut short by Gödel, who proved that any consistent system of axioms that tries to be complete (describe everything) must fail. This crazy pursuit through history has been described aptly by the computer scientist Gregory Chaitin (who, along with Andrey Kolmogorov, is reckoned as a founder of the field of algorithmic information theory). To this day, the field of knowledge representation in computer systems, ranging from databases to the internet, is indebted to Ramon Llull and the Zairja of the Arab astrologers. The amazing thing is that this historical line of development completely missed the Catuskoti, the 4-sided negation in logic. It is a mystery why this did not reach Arab scholarship from India, and from there Europe. It is unlikely that all the knowledge disappeared with the burning of the Nalanda library.

If Llull had been aware of the Indian tradition of logic, he wouldn't have claimed that the people of Tunis needed to convert to Christianity to understand his Ars Magna. He would have known that Ars Magna would forever remain a work in progress, forever falling short of its goals. This was ultimately expressed in the late 1800s by Emil du Bois-Reymond as "Ignoramus et ignorabimus", and at the time, he was soundly ridiculed for saying it.

The history of western logic had another brief brush and fascination with India. The English mathematician George Boole was deeply inspired by Indian philosophy. His wife Mary Everest Boole, a significant mathematician in her own right, was the bridge to India. Her uncle George Everest was a great adventurer in India and gave the eponymous mountain its name. Mary Boole was also in correspondence with Jagadish Chandra Bose, a noted Indian scientist at that time. She wanted him to study the efforts of her husband, but this letter was left unanswered by Bose. George Boole developed an elaborate system of logical inference using the principles of algebra, which he wanted to apply to esoteric ends (salvation, transcendence etc.). Nobody at that time believed that this system would have any practical use. This changed dramatically when Claude Shannon used Boolean logic as the basis for his mathematical theory of information. It is an open question whether the development of information theory would have taken a different course if Boole had incorporated the 4-sided negation into his logic. However, this did not happen, and computer systems today are mostly restricted to 2-sided or 3-sided logic (infinite-valued logics, like fuzzy systems, are actually extensions of 3-sided logic, as illustrated in Figure 2).

Philosophy of Nagarjuna: 

In philosophy, Western philosophers starting from Kant have expressed doubts about the limitations of spoken knowledge. Russell and Wittgenstein systematically characterized the various dilemmas and paradoxes that arise from these limitations of expression. In Indian philosophy, an awareness of these limitations was already fundamental, due to the Catuskoti logic. The onus in Indian philosophy was not on finding these limitations, but on achieving spiritual transcendence by overcoming them, even if they cannot be expressed in spoken language.

Historically, Indian philosophy was divided into 9 streams or viewpoints (Darshana in Sanskrit). Of these 9 streams, 6 support the existence of a conscious self (Samkhya, Nyaya, Vaiseshika, Yoga, Mimamsa, Vedanta) and 3 oppose the existence of self (Buddhism, Jainism, Charvaka philosophy). But all these systems germinate from the same root and stem, which is the Samkhya tradition, which provides the mythological imagery and symbols to argue different concepts. I have earlier written an introduction on this blog to the Samkhya system (meaning enumeration), clothing traditional philosophical terms in a modern garb of vocabulary, so that we can better understand them. Mythological images can better express notions of paradox than linguistic symbols and logic. So many Indian religions have explored this route, developing a plethora of symbols and stories. The idea is that transcendence can be achieved through devotion, or love, or following the right action, without the necessity of knowledge expressible in spoken form.

However, the path of knowledge was not altogether absent. Philosophical systems using logical inference and deduction were first developed by the Nyaya and Vaiseshika traditions, which are based on the Catuskoti. Knowing these traditions (or viewpoints) was considered essential for greater philosophical study in the other systems. The first significant challenge to these systems was given by Gautama Buddha. The Jain tradition also has an elaborately developed logic, criticizing and thus extending the earlier Nyaya systems. Probably the most extensive development of logical inference for transcendental goals was done by the Buddhist philosopher Nagarjuna in the 3rd century AD. The Australian philosopher Graham Priest, who specializes in dialetheist (or paraconsistent) logic, wrote a great introduction to Nagarjuna's philosophy from a western perspective. He believes that the vision of Nagarjuna was greater than even that of Wittgenstein and Russell, as he observed greater paradoxes.

I will try to give a short introduction to Nagarjuna, but before we go there, we should know that he remains a profound influence in many Buddhist countries. His Madhyamika (or middle-way) tradition has deeply influenced Buddhist development in China and Japan. For example, the Japanese koans (riddles) stem from a tradition that goes back to Nagarjuna. To get a more thorough overview of this philosophy, one should consult these living traditions; what I can give in this blog is a very brief glimpse. In India, the influence of Nagarjuna was profound on Shankara, the philosopher who popularized the Advaita tradition in Hinduism. It is Shankara who is probably the closest link between existing religious practices in India and its ancient philosophic tradition, and I believe that without understanding Nagarjuna we cannot decipher that link. I am not capable, in any measure, of undoing these mysteries, but I think I can give a hint.

Recall that we represented the Catuskoti using a Venn diagram with four zones, for studying an objective question such as "Is the cheetah the fastest animal?". But any such objective study is limited to the very ground on which we observe the different phenomena. In other words, we are restricted to the 2D plane of the Venn diagram, although the real universe may have many more dimensions. The computer scientist Alan Kay once gave a good analogy for this situation. Imagine that we are standing in a deep gorge, like the Grand Canyon in the USA, shaped by eons of erosion and landscaping. From within this gorge, we can observe the many layers of sedimentary rock that record the movements of history, but our vision is hemmed in by the rock walls. Very rarely do we catch a fleeting glimpse of the sky and wonder what may lie beyond. Alan Kay calls such a glimpse a "Kerpow" - a punch that pushes us beyond the dimensions of the world we live in. In Buddhist and Advaita philosophy, salvation or transcendence is considered to be obtained by a series of such "kerpows".

Firstly, we do not even know the topology of the 2D plane of reality that we showed in the Venn diagram. Buddhist and Hindu philosophy considers this to be "Samsara" - a cyclical world where we keep retracing the same steps. I think the Dutch artist Escher gave a brilliant visualization of such a world, with ants walking along a Möbius strip.

Figure 4: Ants walking around a Möbius strip - illustrating the Buddhist Samsara 

Nagarjuna considered the problem of logic in such a world. A rationalist is like an ant walking around this Möbius strip of logical constructs (the zones of the Catuskoti). In order to escape from this restrictive world, Nagarjuna developed a 9-valued logic. The first four zones correspond to one side of the plane, as illustrated in Figure 3. The next four zones correspond to the other side of the plane - an inversion of the previous four. Nagarjuna said that an inquisitive and philosophical mind will not be satisfied with any of these 8 zones and will seek a 9th zone, which is an escape from the Möbius strip into three-dimensional space. Obviously, this is a procedure that has to be repeated several times, as we approach concepts of greater logical ambiguity. In Buddhist mythology, this is akin to climbing the Meru mountain, at the top of which lies salvation or Nirvana.
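The structure described above can be sketched as a simple enumeration. Here is a toy model in Python (my own framing for this blog, not a formalization from Nagarjuna's texts): the four kotis repeated on each side of the Möbius strip, plus a ninth "escape" value.

```python
from itertools import product

# The four kotis (corners) of the Catuskoti.
KOTIS = ["A", "not-A", "both", "neither"]

# The strip has two sides; the same four zones reappear, inverted,
# on the other side. (The side names are illustrative, not canonical.)
SIDES = ["front", "back"]

nine_values = [f"{side}:{koti}" for side, koti in product(SIDES, KOTIS)]
nine_values.append("beyond")  # the 9th value: escape from the strip

print(nine_values)
print(len(nine_values))  # 9
```

The point of the sketch is only that the 9th value is not another zone on the strip - it sits outside the enumeration that generates the other eight.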

A very similar rhetorical exercise in negation is undertaken in the Advaita tradition, whose followers perform negation in a series of steps called "Neti Neti" ("not this, not that" in Sanskrit).

As we can see, Buddhist and Hindu philosophy graduated from the 4-sided logic suitable for objective analysis to more complex 5-sided or 9-sided logics suitable for spiritual investigation. However, this gave 4-sided logic a bad image, as it became confused with esoteric and spiritual agendas. I believe a proper application of 4-sided negation to objective scientific fields will pay dividends, especially in situations where our ignorance of the field is immense but predictable.

How to overcome bias in atheism and democracy: 

Bias in science is dangerous, but bias in politics has immediate consequences for humankind. Even if people don't have a thirst for knowledge, they need to survive through their daily struggles. So if our political discourse has serious limitations rooted in the logic we deploy, we will face the trauma of misunderstanding and violence. Unfortunately, the majority of people who hold active political opinions today use 2-sided or 3-sided logic in their heads. That means they do not understand the 4th zone (neither yes nor no) that reflects their relative ignorance of the subject.

A faulty language of debate is like a bad posture of the human body - used repetitively, it results in pain. A good analogy is the vertigo caused by tiny calcium crystals that accumulate in the inner ear, destabilizing the vestibular sense of balance. People who suffer from this vertigo feel that their world is constantly spinning - needless to say, a nightmarish condition. A cure was found relatively recently by an American doctor named John Epley. Using a 3D cast of the inner ear's anatomy, he devised a series of head motions (now called the Epley maneuver) that moves these particles from the troublesome spot in the ear to a non-bothersome one. The relief felt by the patient is dramatic and immediate, almost leading them to suspect black magic. I think heated political debates would benefit from the 4-sided negation in a similar manner. By understanding how our language and words are framed, and how limited they are in expressing certain ideas, we can overcome our ignorance and conceit.

Of all the debates in politics, I think nothing is as absurd as the debate on religious freedom. This is because nobody really wants to understand the other side in a religious debate. The famous (and notorious) atheist Richard Dawkins recently put it this way: "If I want to repudiate Leprechauns, I should not be expected to understand Leprechaun ideology". This laziness is understandable. What is not excusable is the reluctance to see one's own ignorance. By using 4-sided logic, instead of the more intuitive but faulty 2-sided logic, we develop a habit of systematically locating where our ignorance lies.

In France, there is currently a heated debate about the limits of religion. Owing to its historical revolutionary spirit against the Catholic church and the monarchy, French political life is very secular. However, this secularism comes from the pick-axes and shovel-fights of the 18th century, when all the debates were fought using the 2-sided negation. The secularist in France is highly confident of how right he is; in this way, he is not much different from a rabid religious fundamentalist. The problem here is not with the ideology, but with the 2-sided negation.

Wise people of many cultures, including religious people, have learnt to be conscious of the limitations of their knowledge. In India, where many cultures and religions have intermingled through history, people have a high tolerance for ambiguity. A person may be Hindu, but pray in a church, or seek blessings from a Muslim saint. A musician may be a deeply religious Muslim, yet still offer prayer to Saraswati (the Hindu goddess of learning) - a good example was Ustad Bismillah Khan, the gifted Shehnai player. There have been thousands of examples of Hindu and Muslim saints who cherished the company and customs of other religions; my favorite is Kabir, a devoted Muslim poet who sang praises of Rama. The evidence of this cosmopolitan culture in India is present in all religions, including Christianity and Islam, all of which are integral parts of the rich tapestry of Indian culture today.

There was recently an opinion piece in The New York Times comparing the state of secularism in India and France. I find this essay a nice illustration of the problems with a 2-sided secularist outlook. France has an awful history of religious discrimination - against Jews, against Protestants, and now against Muslims. By comparison, the history of India is a miracle, with very few incidents of religious violence for such a large population. In my opinion, India achieved this not through greater scientific knowledge or better legal rights, but through a cultural assimilation of the 4-sided negation, which allows for greater tolerance in debate. In contrast, due to its reliance on 2-sided, black-and-white logic, France is in greater danger of discriminating against its own citizens.

However, India is not an idyll of peace, and its traditional tolerance is today under great threat. The urge for purity and austerity, made notorious by Islamist fundamentalists, is encroaching on every religion. In India, for example, the Hindutva brigade is creating a cartoon version of Hinduism in which the 4-sided logic is truncated into a 2-sided one. This desire to bifurcate everything in this world into two distinct boxes - good and bad, pure and impure - is a self-destroying engine. Happiness and salvation in this world lie in a tolerance for ambiguity and in a consciousness of our limitations. This is as true of atheists as of religious people, of whatever religious stripe.

The works of Shakespeare are considered a turning point in western literature, where the internal dilemmas and mental confusion of characters are shared directly on stage. A human being is thus presented as a process in the making, not as a finished product. The most famous example is Hamlet's soliloquy "To be, or not to be". This introspection of our mental processes is essential for becoming better people. But the story of Hamlet is a tragic one: he succumbs to the guilt of murder. Had Shakespeare used the 4-sided negation in Hamlet's soliloquy, following the lead of Nagarjuna, he would have asked

"To be, or not to be"
"Or to be and not to be"
"Or neither to be nor not to be"
"Is there still a question ?"

A dramatic gesture would be to make Hamlet nod his head using the Indian head wobble. I will end this blog on this funny note: I believe that Nagarjuna's philosophy is responsible for this gesture, which stumps many foreigners visiting India, who cannot understand whether their Indian friend is saying yes or no. For clarification, they may ask,

"Do you mean yes ?" 
"Or no ?"
"Or yes and no ?"
"Or neither yes nor no ?"

"Or did you just reach nirvana ?" (This is an extremely rare case) 

Addendum: Representation of Chatuskoti using Predicate logic

After a discussion with Joscha Bach on Twitter, I realized that the Chatuskoti can be implemented in predicate logic using 3 predicates: A, Unknown(A) and Unknown(Not A). When these 3 predicates are treated as binary variables, they give rise to 8 states. More interesting things can be represented when these predicates are treated as continuous variables, taking any value in [0,1]. The space spanned by these 3 predicates can be visualized as a cube standing on 3 axes. I propose to call this Gotama's cube (referring to the ancient Indian logician who developed the Nyaya Sutras). The 8 states appear as the corners of this cube, which can be enumerated as follows:

0) Unknown Not A, Unknown A, Not A
1) Unknown Not A, Unknown A, A
2) Unknown Not A, Known A, Not A
3) Unknown Not A, Known A, A 
4) Known Not A, Unknown A, Not A 
5) Known Not A, Unknown A, A 
6) Known Not A, Known A, Not A 
7) Known Not A, Known A, A 

Of these 8 states, the states (4,5) are indistinguishable: since A itself is unknown, it does not matter which value A takes. So for practical purposes, they can be considered as 1 state, marked "Unknown A". Similarly, the states (0,1,2,3) are indistinguishable and can be combined into a single state marked "Unknown Not A". With these 4 states, we get back to our traditional representation of the Chatuskoti. The clubbed state "Unknown A" is equivalent to "A and Not A", and the clubbed state "Unknown Not A" is equivalent to "Neither A nor Not A".
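This collapse from 8 corners to 4 zones can be checked mechanically. Below is a minimal Python sketch (the function and variable names are my own) that enumerates the corners of the cube in the same binary order as the list above and clubs them into the 4 Chatuskoti zones.

```python
from itertools import product

def collapse(known_not_a, known_a, a):
    """Club a corner of Gotama's cube into one of the 4 Chatuskoti zones."""
    if not known_not_a:           # states 0-3: Unknown(Not A)
        return "neither A nor not-A"
    if not known_a:               # states 4-5: Unknown(A)
        return "A and not-A"
    return "A" if a else "not A"  # states 6-7: fully known

# Enumerate the corners in binary order: (Known Not A, Known A, A).
for i, corner in enumerate(product([False, True], repeat=3)):
    print(i, collapse(*corner))

zones = {collapse(*c) for c in product([False, True], repeat=3)}
print(len(zones))  # 4 - exactly the traditional zones
```

Note that the clubbing here is a policy choice layered on top of the predicates, mirroring the prose above; the predicates themselves still span 8 states.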

Why do we need 8 states in predicate logic and not 4 to represent Chatuskoti ? 

This is because the Chatuskoti states clearly that exactly one of the 4 states can be true: A, Not A, A and Not A, Neither A nor Not A.

Because of this, we cannot map the states (A and Not A) or (Neither A nor Not A) directly to the predicate Unknown(A). The predicate Unknown(A) can be true at the same time as A or Not A - this is the natural combination of predicates in predicate logic, but it is strictly forbidden in the Chatuskoti. So the traditional method of handling uncertainty in predicate logic - a single predicate Unknown(A), which (by the way) is different from fuzzy or probabilistic logic without any additional predicate - is insufficient to map the space of the Chatuskoti.

However, it is possible to cover all 4 zones of the Chatuskoti with the 8 states given by 3 predicates, by allowing for some redundancy. Please note that this is not the most compact representation of the Chatuskoti: in theory, you can map these 4 states with just 2 predicates. However, these 2 predicates are not the usual Unknown(A).

I am not sure whether many people use 3 predicates - specifically, an additional predicate Unknown(Not A) alongside Unknown(A) - in computer-based logic and knowledge representation. This is a fairly simple method to adopt, and will potentially give rise to more stable knowledge representations. If you are a researcher in knowledge representation or machine learning, please go ahead and try it. All you need to do is annotate your traditional error bars with 2 confidence values instead of one. :)

You can also visualize the uncertainty on Gotama's cube using the HSV colour representation: map the hue to the (A - Not A) axis, the saturation to Unknown(A), and the intensity (value) to Unknown(Not A). The interesting thing about this visualization is that it exposes the fact that without saturation, you cannot distinguish hue, and without intensity, you can distinguish neither saturation nor hue. But in lighter shades, the colours become more visible.
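This mapping can be sketched with Python's standard colorsys module. The function name and coordinate conventions below are my own assumptions about the proposal above:

```python
import colorsys

def belief_to_rgb(a, unknown_a, unknown_not_a):
    """Map a point of Gotama's cube (each coordinate in [0, 1]) to RGB.

    Hue encodes the A / Not-A axis, saturation encodes Unknown(A),
    and value (intensity) encodes Unknown(Not A), as proposed above.
    """
    return colorsys.hsv_to_rgb(a, unknown_a, unknown_not_a)

# Without saturation (Unknown(A) = 0), hue is indistinguishable:
# both calls yield the same grey, whatever the A coordinate.
print(belief_to_rgb(0.2, 0.0, 0.7))
print(belief_to_rgb(0.9, 0.0, 0.7))

# Without intensity (Unknown(Not A) = 0), everything collapses to black,
# hiding both hue and saturation.
print(belief_to_rgb(0.3, 0.9, 0.0))
```

The degenerate cases printed at the end are exactly the property noted in the text: zero saturation hides hue, and zero intensity hides everything.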

Do let me know if you do any such experiments. I will be happy to report them in the comments here.