Monday, January 25, 2016

Homebrew AI Club



Last Saturday, I had a great chat on Skype with Samim and Roelof - two very cool guys who are planning a sort of machine learning revolution. I met them on Twitter, where I came across their "Ethical Machines" podcast. It is a remarkable podcast that I highly recommend to anybody interested in AI, computers or the future of society. Unlike the regular commentary in the mass media, whose portrayal of AI is often a one-dimensional caricature of the question "How far are we from Skynet in Terminator?", this podcast gives a nuanced understanding of everything about AI. This is because Samim and Roelof are both active programmers and researchers, who follow and shape the most recent trends in AI. But it is not just a technical podcast. It is also about the ethics and politics of AI, as well as about art and culture.

Let me give a small profile of Samim and Roelof. They created GitXiv, a nice mashup of arXiv and GitHub - the two places on the internet where researchers post preprints of scientific articles and the source code of their projects. GitXiv is a place that combines these two, and more. Compared to other scientific communities, the machine learning community is remarkably open and sharing, with even big companies making the source code of their projects available to the general public. GitXiv is a place to bring all these energies together. They are soon planning to extend it to the sharing of data sets - an important element for reproducing scientific results today. I feel Samim is an artist at heart; he is interested in AI because it opens a new frontier in creative exploration. He is also a sharp thinker and entrepreneur. He is based in Berlin, which is probably the most interesting place in the world for a person with this combination of qualities. Roelof is a Ph.D. student in Sweden, working in the area of natural language processing. But he is also a political activist with a strong understanding of social activism. In the past, I briefly participated in free-software activism, though that is nothing to write home about. But I understand how incredibly enriching it is to do the basic groundwork of social activism and to engage with people. So Roelof has some great perspectives drawn from his experiences.

Together, Samim and Roelof are a great team, and they share a passion for democratizing AI technology for the masses. I ran into them by making some snarky comments on Twitter, with my typical pessimism about the future of AI. Especially over the past year, I have grown very critical. I have written many critical articles on this blog about the progress of AI and about the increasingly despondent future our society is building with it. This is rather depressing, as I consider myself an AI researcher at heart. There is a classic trope in PhD Comics about the "bitter post-doc" - I probably fit that bill very well. When I speak with younger researchers and students, I have to consciously watch what I say so as not to depress them completely. But my pessimism about AI has less to do with technology, and more to do with the single-minded "Skynet" narrative that our society is building around it.

Speaking with Samim and Roelof lifted my spirits, at least for a brief while. Samim is a very optimistic guy who sees the silver lining in any cloud. Right at the beginning of our chat, he pointed out that we need more narratives for how we tell the AI story. He noted that there are already several positive ones - the ecological perspective, the global-consciousness perspective, the young-entrepreneurship perspective. Even with my bitter pessimism, I couldn't deny that! Our chat then turned to how we can build on these narratives. We remembered the glorious days of the personal computer revolution in the 1960s and 70s - in some way, we are all fans of the great pioneers like Doug Engelbart, Alan Kay and so on. At the same time, we are conscious of the great consolidation going on in the market, where huge data clusters are condensing under the rule of monopolies. I talked about the consolidation going on in the visual effects and creative industries, dampening some of Samim's hopes that this might be a way out. But Samim is too optimistic to dwell on my negativity. Roelof added a very interesting point - about how consolidation is also going on in the sphere of academia and universities. He pointed to the creation of the University of Amsterdam - merging two historically separate universities (religious and secular) into one - and then the creation of commercial research centers within the premises of the university, where students have to sign an NDA before stepping in. This was shocking news to me (well, not really - what did I expect?), but we bemoaned how the Netherlands - which historically had a liberal tradition that saved European culture in the aftermath of the printing-press revolution - is no longer the defender of free culture that it used to be.

We then talked about how - and whether it is even possible - to replicate the personal computer revolution in the sphere of AI and machine learning. What we desperately need today is a "Homebrew Computer Club" - the rag-tag band of losers, programmers and nerds which, in the 1970s, took on the grand monopolies of Xerox and IBM. I mentioned that this has to be a popular movement, not limited to the elite set of programmers and researchers, but inclusive of all sections of society: young kids at school, old people, and especially artists and creators of culture. We need to get those guys and girls who define the "quintessence of cool". AI and machine learning are something everybody should claim ownership of. But how?

A few weeks ago, I visited the Paris Machine Learning Meetup - hosted by the brilliant blogger Igor Carron and his co-conspirator Frank Bardol. I talked about virtual faces and Leonardo da Vinci - as random a mix-up of ideas as you can imagine. I had been meaning to go to this meetup for a long time, mostly because I liked its logo, which I show below.

This logo talks about giving power to the data, but it is really using the revolutionary image of "giving power to the people". It reminded me of stories from the early days of computing, when Ted Nelson published his legendary book "Computer Lib / Dream Machines", which uses the same iconography.


This crazy book used to be the bible of the Homebrew Computer Club. I haven't seen it in person, but finding and owning a copy is one of my life's missions. The book kickstarted the idea of democratizing computer technology for the masses, long before anybody had seen or heard of a personal computer. What we need today is a seller of such dreams about machine learning for the masses. That means machine learning for your grandma, your dog, and your street artist. We cannot afford to lock machine learning in the prison of researchers, elite programmers and mathematicians. The only hope of saving our society from descending into a totalitarian state is to democratize AI. But how do we do that?

The meetups sprouting in all major cities of the world show a path forward. They widen the audience from researchers, engineers and managers of fat-walleted companies to something broader. But there is a long way to go before we get your grandma, your dog and your street artist interested in machine learning. Alan Kay thought of computer programming as a medium, something as simple to use as a book. But we are not there yet. Even bona fide computer programmers don't think of programming as a medium; they think of it as a skill to show off. In reality, programming should be as natural as speaking a language: obviously requiring some training, but something one can do without conscious effort. We have a long way to go until we get there.

Even though I grew up as a quintessential nerd and studied computer science in various universities, I never understood what computers were all about until I came across a lecture by Michel Serres in 2007. The French research institution INRIA, where I was working as a doctoral student, was celebrating its 40th anniversary and had invited the philosopher Michel Serres as the keynote speaker. His talk made a point so simple that it blew my mind: computers are not tools for solving problems, but tools for solving people, who in turn will solve the problems. In other words, all the fantastic applications of computers and the internet are just a side-show to something much bigger: like tiny ripples of water on a tsunami. Very few people understand the real impact of the computing revolution, because they need to imagine it from the perspective of a "changed brain" or a "new self", not from their present self. But there is a catch - before the computing revolution can catch on and make its true impact, it has to engage with the vast majority of society. It should not be limited to just the elite few programmers and researchers.

It was not Michel Serres who first articulated this vision, but Marshall McLuhan, and it greatly inspired early pioneers like Engelbart and Kay. An even earlier expression of this vision comes from Vannevar Bush, who wrote the essay "As We May Think" at the end of the Second World War. In physical terms, a computational way of thinking would rewire the human brain, expanding the higher cognitive functions as well as those dealing with compassion and empathy. In other words, we would become a better species through the practice of computational thinking. I think the early dreams of the computing pioneers have largely failed. We stand today in a desolate moonscape of parched desert, where the vast majority of the human population lives in a prison of apps and trivial status updates - what Alan Kay once recalled, quoting Henry Thoreau on the transatlantic telegraph cable: that it would mainly help the vast majority of Americans to learn the latest fashion statement of a European princeling. This pessimism about human nature aside, I think there is a fundamental reason why the early dreams of the computing pioneers have failed - the lack of useful applications that engage the user with the full potential of computing. I think this is now beginning to change with machine learning.

In the early days of personal computing, Alan Kay and his colleagues had little children draw on computer screens and play music, and used this as a basis for teaching the principles of computer programming. As a child drew a picture in the equivalent of an "MS Paint"-like program, they romanticized that the child was "programming". By the way, "MS Paint" (and Apple's paint programs, or any of the other clones of that demo) is a trivialized corruption of the original ideas behind the demo at Xerox PARC, which was genuinely designed to teach many aspects of computing. But despite their best efforts, the pioneers did not succeed in inculcating a knowledge (and love) of programming in the masses. Today, we might be in a much better position, because we can rely on large data sets, sensors and machine learning source code to build far more engaging applications.

So the time is ripe for a "Homebrew AI Club". A new culture of computing can start today, one where every human can be an active participant, taking full advantage of the connecting power of the internet. But what is stopping it? I don't know. It may be time to take off our conspiratorial hats and believe in the full potential of the human species. It is time to set aside our fears of the NSA, the Big Brother state and the lousy social networks, and see the bigger picture.

I don't know how and when this massive social change will happen. But after talking with Samim and Roelof, and generally brewing some thoughts in my puny head, I have a list of points to ponder.

Ten commandments about AI (actually, just ten talking points):

1) The word "homebrew" is brilliant. It is reminiscent of home-brewed alcohol, which has a direct effect on the human user and immediately alters mental states - which is exactly what we need to aim for.

2) We need artists. Anything as big as a social revolution will not happen because a bunch of nerds talk about mathematical equations. We need big-mojo people like Che Guevara or Steve Jobs (though less arrogance and ass-holish behavior would be nice). Heck, we need women. It is high time there were more women in computing. Most of all, we need to engage with people who have a creative spark.

3) All of us nerds need to start at home and explain technology to our families, girlfriends and boyfriends. The first thing to explain is probably data security and privacy. There is a monumental number of losers on the web who share their private data without even knowing it. We should first subtract our friends and families from this group.

4) But privacy awareness and computing knowledge are not the same thing. There will never be a magic switch that brings us to a privacy-respecting world. Living in the digital world will always be a battle against adversarial powers, which in the future will only become more powerful and obscure through the use of data and machine learning. It is highly important that everybody knows how to keep track of their own data and use it for the better. But how do we train n00bs in this?

5) An important obligation is helping elderly people. Most elderly people are already clueless with technology - fiddling with the inner lives of TV remotes and email preferences is not for them. But everybody has a right to lead a dignified life online. Before we snark and snigger at the troubles elderly people have with technology, let's imagine how much more awful our own lives will be when we grow older. Technology will screw us a million times over then (that is, if we are still alive, and did not all disappear in the smoke of a nuclear explosion).

6) The first step in doing machine learning is collecting data. What better place to start than collecting data about oneself: one's own circle of friends, shopping habits, tax bills, entertainment preferences and so on. We need open-source software that helps people collect this data and organize it in a nice manner (a small sketch follows after this list).

7) The second step is to train people to use machine learning: simple regression functions first, then more complicated methods like deep learning. It is likely that not everybody will understand the maths behind it. But that is not as important as being able to use these methods in a regular and confident manner - similar to how one uses home appliances every day. I still remember the awful day when, as a fresh graduate student in the USA, I put a tin-foiled sandwich in the microwave and saw it explode in fireworks. To make it all worse, this happened in front of a bunch of school kids whom I was supposed to teach about robotics! In my entire life until then, as an engineering student in India, I had never used a microwave oven. If you had grilled me about what happens when you put metal in a microwave, I would have pondered my physics knowledge and answered correctly that it would explode. But when I was just hungry and wanted to eat my sandwich, physics was the last thing on my mind. So I completely failed as a functional user of home appliances in the USA, due to my lack of training. With machine learning, I think we likewise need to train people to first be functional users of the technology. Obviously, a few will cross over into learning the maths behind it. But even if they don't, that is still quite okay.

8) The third step is to expand this training into a full programming language. I think most people get the teaching of programming languages wrong. Most humans don't care about the Turing-completeness of a language or a programming paradigm. They just want to get shit done. But knowing the basics of computability and information theory will be a must for anybody in the future. So we need to train people to think computationally: what types of algorithms there are, how we store data in structures, how we evaluate computational cost, and so on. We need ways to explain this simply to everybody. We can do this. After all, driving a car is not trivial. Putting a sandwich in a microwave is also not trivial (as I found out). But with a little bit of training, people do both of these tasks quite well.

9) We need to swallow our nerd pride. It is not easy. We all have to work on it. When you are superior to your peers in some skill that is essential for everybody's survival, you can very often feel smug about that superiority. I don't know if it is humanly possible not to. But being a smug monkey is not the point. Evolving from monkeys to humans is our goal.

10) We need to enlist the help of doctors and the medical profession. Of all the various disciplines, I think clinicians have the most respectful view of human potential. Every day, they see broken people in their clinics, but they try to fix them and raise their potential. In a way, great teachers are also like clinicians. They treat each student separately and help them realize their individual potential. The medical profession is also very relevant because everybody is concerned about their own body and their medical choices. Often these choices are complex and require a fair amount of statistical and computational thinking. So we can develop computing paradigms that teach machine learning to people by using their own personal data for medical choices.
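To make points 6 and 7 concrete, here is a minimal sketch in Python of what a first "functional" use of machine learning on one's own data could look like. It assumes nothing beyond NumPy, and the file name, columns and toy numbers are hypothetical placeholders, not a real application.

# A minimal, purely illustrative sketch of talking points 6 and 7:
# collect a small personal data set and fit a simple regression to it.
# File name, columns and numbers are hypothetical placeholders.
import csv
import numpy as np

# Point 6: collect data about oneself and keep it in a simple, open format.
rows = [
    # (date, hours slept, cups of coffee, self-rated mood from 1 to 10)
    ("2016-01-18", 7.5, 2, 7),
    ("2016-01-19", 6.0, 4, 5),
    ("2016-01-20", 8.0, 1, 8),
    ("2016-01-21", 5.5, 5, 4),
]
with open("my_personal_log.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["date", "hours_slept", "coffee_cups", "mood"])
    writer.writerows(rows)

# Point 7: a simple regression - how does sleep relate to mood?
hours = np.array([r[1] for r in rows])
mood = np.array([r[3] for r in rows])
slope, intercept = np.polyfit(hours, mood, deg=1)
print(f"Each extra hour of sleep ~ {slope:+.2f} points of mood (toy data!)")

The point is not this particular model, but that the data, its format and the fitted line all stay in the hands of the person who collected them.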

I would like to finish off this post by talking about narratives, and about the stories we need to tell about AI. I will use two existing pop-culture narratives - the "Star Wars" movies and "The Lord of the Rings". I am a great fan of both.

The "Return of the Jedi" from the Star Wars movies gives a very nice narrative about how a "primitive" tribe of Ewoks overcame the much greater power of imperial storm troopers. Sure, there were a few Jedi warriors who were helping them, but the Ewoks were the fundamental game changer in the battle, which is one of the most lovely aspects of the movie. In our battle over AI, we need to get the Ewoks - people who value friendship and nature more than technological gadgets.

The "Lord of the Rings" books offer another similar narrative (actually George Lucas was quite inspired by these books).  The final battle with the dark lord Sauron is won through an alliance of elves, men, dwarves, as well as hobbits and ents. I particularly like how central the hobbits and the ents are to these battles, as these are tribes that are not technologically superior, but value friendships and nature immensely. Another nice reference is to the wizards (loosely analogous to the AI researchers), who are split between the forces of the light and the dark.

Well, these are just two narratives, not quite complete, but much better than the stupid Skynet narrative about AI. The danger of the Skynet narrative is that it is fatalistic - as if humanity is like a deer caught in the headlights, unable to do anything about AI.

Robots are going to get your job.
Robots are going to make out with your boyfriend.
Robots are going to eat your babies.
Robots are going to wipe you clean.
No, Robots are going to make your breakfast.
No, Robots are going to make you immortal. 
These binary narratives about AI need to stop! Humans are the agents of their own lives!

We also don't need help from a Jesus-like savior, as in the Matrix movies (yes, hello, the title of my blog). But we need stories that help us believe in everyone.


Sunday, January 17, 2016

Anatomy of open-source education: A paraview of "Adventures in Human Being" by Gavin Francis


An anatomical drawing from "De Humani Corporis Fabrica" by Andreas Vesalius, published in 1543

Traveling through time in Florence:

A thin fog hangs low on the Arno river in Florence, framing the picturesque Ponte Vecchio in a nice photographic effect. Inhaling the chill of the early winter air, I walk by the rows of shops selling jewelry and trinkets on the bridge. As the evening sets in, the throngs of tourists are gone, along with the artists, street-food vendors and peddlers of souvenirs chasing them. There is a faint smell of wood smoke, perhaps coming from a pizza being baked somewhere.

As I absorb this moment of tranquility, my mind is occupied by a man who lived here five and a half centuries ago - Leonardo da Vinci. I am reading "Adventures in Human Being", a book by the young Scottish doctor and clinician Gavin Francis, and Leonardo is a central character in it. Partially autobiographical, the book narrates the author's experiences in his medical practice, exploring the idiosyncrasies of the various organs of the human body and how they were discovered in the history of medicine. Leonardo da Vinci - the great artist and engineer, the maker of the Mona Lisa and the parachute - also helped humankind discover the most fundamental of objects: their own faces. He was the first person in history to study facial expressions in a scientific manner - how the muscles stretch and deform the skin, and how human emotions are translated into smiles and frowns on the face. As I pass by the tourists and Florentine citizens, I wonder how Leonardo would have looked at their faces. Would he have read the emotional history of their lives in one glimpse? Would he have seen, as if with X-ray vision, the anatomy of the muscles beneath the skin? As I wonder, I begin to suspect that Leonardo is the answer to one of the greatest questions of history: how did our civilization enter the modern age of scientific investigation? And why did this happen first in Florence, Italy?

Once the cultural capital of the world, Florence now resembles a museum turned inside out - a dissected corpse of the European Renaissance. It is as if the vibrations of a furious energy from several centuries past spilled the masterpieces of art onto the pavements and gateways, with museums and galleries unable to hold them behind closed doors.

The Museo Galileo, an unassuming building behind the famous Uffizi gallery, is dedicated to telling the story of how modern science began. I visited this museum a few years ago and saw its fantastic historical relics. Now I play a game in my mind, trying to recollect where behind the walls of this building they lie: the first telescopes, microscopes, astronomical charts, sundials, mechanical clocks, navigational compasses, and the first instruments for measuring electricity. Walking through those exhibits was like traveling through time, with each successive era bringing greater standards of precision in measurement. Apart from the myriad scientific instruments, two sets of objects piqued my interest. The first was a set of demonstrators illustrating specific scientific principles - for example, the parabolic path of a projectile discovered by Galileo. The second was a set of anatomical models showing advances in medical science - for example, tools for delivering babies in complicated pregnancies. At first sight, these anatomical models, with their messy biology, stood awkwardly apart from the clean instruments of physics. But there is a link between the two, though this fine museum fails to show it within its modest premises. That missing link now lies in the British Library in London, in the notebooks of Leonardo da Vinci. They demonstrate that anatomy was the first modern science, and that physics followed afterwards.

The great physicist Ernest Rutherford said:
All science is either physics or stamp collecting. 
This disdain runs deep in our academic establishment, with physics at the top of the pecking order of departments. Many fields try to ape physics in their methodology, aiming for overarching theories that condense the "truth" into a small set of equations. But are we doing it completely wrong?

In this post, I will narrate a short history of the science of anatomy - the original harbinger of precision in modern science. I will argue that we scientists should aim to mimic anatomy rather than physics. By that I mean we need to pay attention to detail even without any pretence of an overarching theory. This is not mere stamp collecting, but an art of map making that is necessary to understand a complex and dynamic subject, such as the human body. I believe this is particularly important for my own field of study - computer science, which is arguably leading us into a new scientific age. I once related on this blog a lecture by Michel Serres, where he used the word bouleverser to describe what computers are doing to our society - destroying everything and remaking it in a brand new fashion: politics, culture, education, economy, science, everything. This radical transformation is comparable to only two preceding moments in human civilization: the inventions of writing and printing. As we try to step into such a new scientific age, we should understand how this happened before, 500 years ago in Florence.

Awakening of the science of anatomy: 


(Left) An anatomical drawing based on "Anathomia Corporis Humani" by Mondino de Liuzzi, written in 1316. The drawing is from a later print of 1541. Note the lack of detail compared with Vesalius, above. (Right) The cover of the book, showing the practice of dissection for anatomical studies.


Browsing through Wikipedia, I discovered a peculiar quirk of history that connects three great scientists. The anatomist Andreas Vesalius was born in 1514 and died in 1564. The astronomer Galileo Galilei was born in 1564 and died in 1642. The physicist Isaac Newton was born in 1642 and died in 1726. Tibetans who believe in reincarnation might very well say that the ghost of the scientific spirit successively reincarnated in these three people. The last of the three, Newton, was born on 25 December 1642, which gives rise to a recurring annual joke amongst atheists at Christmas time: "Of course, we need to celebrate the birth of Newton." Newton revolutionized physics by bringing it into the ambit of mathematics: his glorious three laws of motion. Many school children grow up thinking that this mathematical insight occurred to Newton while he was dozing off under an apple tree when lo, an apple dropped on his head. Of course, this story is a myth, but so is the myth that physics was the centre of modern science. To understand what motivated Newton, we need to understand Galileo and his fellow pioneers in precise astronomical observation, such as Johannes Kepler. And to understand what motivated Galileo, I believe we need to understand Vesalius and his fellow pioneers in precise anatomical observation.

By the time of Vesalius, the science of human anatomy was already mature. Crucially, it was Europe that held the advantage in anatomical knowledge over other great civilizations such as China, India or Arabia. These civilizations stood superior to Europe in many other fields, such as mathematics and astronomy. Scholars of history keep arguing over why the awakening of modern empirical science, requiring precise experimental observation, arose in Europe and not elsewhere. There are many theories, but I believe the answer lies with anatomy, where Europe, and specifically Italy, held a distinct superiority over other places. Dissecting the human body was a cultural taboo in many civilizations, as it still is today in many contexts. Ancient India had vast knowledge of medicine, but cutting the human body was banned for religious reasons. Instead, Indian anatomists let the body decompose naturally and peeled off the layers using kusa grass. This was not as effective as the precise surgical instruments used for studying anatomy later in Europe. Ancient China had a cultural emphasis on holistic medicine that required studying the entire body, as opposed to observing each organ in detail. This resulted in a lack of demand for anatomical knowledge. It is not clear whether the Arabs performed dissections, but the strong prohibition in Islam on representing the human body in artistic form resulted in a shortfall of anatomical knowledge.

A strong contrast to all these civilizations is the popularity in Europe of the artistic depiction of the human form. Sculpture and the fine arts were valued highly by the Greeks and pursued with equal gusto by the Romans. It can be argued that Greek science and philosophy stagnated during Roman times, but the arts of sculpture, painting and architecture advanced immensely. As artistic standards for depicting the human form rose, many painters and sculptors realized that they needed to understand the underlying muscular structure of the human body. However, the Roman empire prohibited the dissection of the human body, again for religious reasons. The Greek physician Galen, working in Rome in the 2nd century AD, obtained his anatomical knowledge by observing injured gladiators. When this did not give sufficient knowledge, he compensated by dissecting animal cadavers. For many centuries, his work remained the standard in Europe, as further progress was stunted by the ban on dissection. Remarkably, the delivery of human babies was studied by dissecting sheep, so much so that amnion - the scientific word for the membrane surrounding the fetus - comes from the Greek word for lamb. Obviously, this extrapolation from animal studies to human bodies resulted in several errors, which were not corrected until the 16th century, by Vesalius.

Before Vesalius could achieve his mastery of human anatomy, the gates had to be opened for the scientific investigation of the human body by dissection. The first advantage was given by Christianity. Although it required the proper burial of the bodies of believers, Christianity removed the sharp prohibition on dissection present in pagan Rome. The atmosphere was sufficiently relaxed by the early 14th century AD, when an Italian physician named Mondino de Liuzzi was able to perform public dissections of human cadavers. He wrote a text named "Anathomia Corporis Humani", which is considered the first true anatomical text based on observation of the human body.

Later physicians in Italy regularly performed dissections and improved anatomical knowledge. Thus the text of Vesalius, with its detailed anatomical drawings, is the outcome of a gradual evolution. The publication of this text was greatly aided by the invention of the printing press by Johannes Gutenberg (born 1398, died 1468). But the science of anatomy was already in the ascendancy by this time. Apart from physicians, sculptors and artists in Italy studied anatomy in a rigorous manner. The teacher of Leonardo da Vinci (born 1452, died 1519) in Florence, Andrea del Verrocchio (born 1435, died 1488), was a master of human facial anatomy, as can be seen in his drawing below.


Drawing of St Jerome by Andrea del Verrocchio, the teacher of Leonardo da Vinci

Another great master of human anatomy was the sculptor Michelangelo (born 1475, died 1564). This study of human anatomy flourished in Florence and spread from there to the whole of Italy and Europe. Leonardo da Vinci, in particular, depicted bone structure and musculature in significant detail, as shown below.


Anatomical drawings of Leonardo Da Vinci from his notebooks


Apparently, Leonardo was so curious about facial expressions that whenever he saw particularly ugly or grotesque-looking people, he followed them around town in the hope of catching their expressions under different emotions. His systematic analysis of the human face is the secret behind his masterpieces - such as the Last Supper and the Mona Lisa, with their precise and enigmatic facial expressions.

It is sometimes said that the year 1543, with the publication of De Humani Corporis Fabrica by Vesalius, marks the beginning of the scientific revolution. Another significant event in the same year was the publication of Nicolaus Copernicus's work on the heliocentric theory of the universe (something already proposed by several ancient astronomers). However, we have to note that the work of Vesalius is not the beginning, but a significant milestone of maturity for the science of anatomy. In terms of scientific investigation and experimentation, the other sciences caught up only much later.

The scientific study of anatomy revolutionized surgery, and medicine in general. In his book "Adventures in Human Being", Gavin Francis describes the Victorian medical school building in central Edinburgh in Scotland as follows.

Carved into the stone lintel of the entrance was "SURGERY ANATOMY PRACTICE OF PHYSIC". The greater weighting given to the word ANATOMY was a declaration that the study of the body's structure was of primary importance, and that the other skills we were engaged in learning - those of surgery and the practice of physic (medicine) - were secondary.

Edinburgh has a unique and significant place in the history of anatomy. The physician and artist Charles Bell was born and practised here. Francis narrates how Bell was inspired by the drawings of da Vinci and by their analytical detail in depicting the facial musculature. Bell described in detail the different facial nerves that animate human facial expression. One of the medical pathologies he studied was facial paralysis, now known as Bell's palsy. The complexity of human facial anatomy led Bell to proclaim that humans are unique among animals in the complexity of their facial expressions. This model was later criticized by Darwin, who also studied in Edinburgh. But Darwin expressed a great fondness for the anatomical work done by Bell and by his predecessor da Vinci, and it undoubtedly influenced the later development of his theory of evolution. Even Charles Bell's observations on the uniqueness of human facial expression remain relevant to this day, as facial expression mirrors the complexity of emotional states in the human brain, which is substantially greater than that of other animals. In more recent times, psychological research by Paul Ekman and others has shown that facial expressions in different human cultures, including traditional tribal societies, are remarkably similar to each other and exhibit similar complexity. The synthesis of virtual human characters in films and computer games derives greatly from this work in psychology and anatomy. 3D face modelling for visual effects is also my window into this fascinating field.

Another significant advance in the study of anatomy is the work of Henry Gray, whose classic "Gray's Anatomy: Descriptive and Surgical", first published in 1858, remains an important reference to this day. Many scientists working in other fields have relied on this book to extend their findings into medicine. This open scientific investigation, structured on an open understanding of the human body, has greatly benefited mankind. It is arguably one of the most important gifts of science, alongside evidence-based medicine built on randomized controlled trials and antibiotics such as penicillin. These advances in medicine have saved countless lives and alleviated the pain of many more people. In this sense, even from a practical and utilitarian point of view, medicine (and specifically anatomy) is a better role model for the sciences than physics.


Open-source education of anatomy:

With the advances in body-imaging technologies and computational medical sensors, we now have a unique opportunity to extend the understanding of anatomy to the general public. Unlike X-rays and CAT scans, newer imaging technologies such as ultrasound imaging and magnetic resonance imaging (MRI and fMRI) do not use harmful ionizing radiation. Thus, in principle, they could be used extensively to acquire far more data about human bodies than we have today. But in practice, they remain extremely expensive and have not yet reached economies of scale. Very few people have deep knowledge of their own bodies as captured by the latest medical imaging technology. Why is this?

Very often, it is too late when doctors diagnose harmful tumours, bone fractures or misalignments in the body. This results in costly late procedures or even in the death of patients. This is terrible, but there is a greater problem that is not apparent to people unaware of the possibilities of computational medicine.

It is now possible to capture and analyse the human body throughout its metabolic processes, and obtain a detailed understanding of digestion, locomotion and cognitive control. Obviously, all these bodily processes depend on the exact physical structure and anatomy of the patient, as well as their lifestyle. So medical advice can be tailor-made to the exact needs and requirements of the patient. This can be achieved by computational analysis of the sensor readings from a single patient, placed in relation to statistics from large human populations. All of this computational analysis needs to be grounded in the anatomy of the human body. Another grounding factor is the DNA of the patient. If we have an open model to represent this knowledge, it can facilitate the development of personalized drugs as well as lifestyle recommendations, prescribed by the doctor in coordination with experts in many different fields. These recommendations can be as detailed as the posture of the body, the cognitive tasks during the day, or how to organize one's home and furniture. This will be in stark contrast to the one-size-fits-all drugs that we currently have on the pharmaceutical market. In fact, future generations will look back at our current medical practice in dismay, much as we look at the ancient Roman physicians who delivered human babies on the basis of sheep anatomy.
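As a toy illustration of this idea - relating a single patient's sensor reading to population statistics - here is a minimal sketch in Python. The quantity, the numbers and the reference values are all invented for illustration; a real system would rest on clinically validated reference data and proper anatomical models.

# A toy sketch: place one patient's reading in relation to population statistics.
# All names and numbers are invented for illustration only.
from statistics import NormalDist

# Hypothetical population statistics for resting heart rate (beats per minute).
population_mean_bpm = 70.0
population_std_bpm = 10.0

# One patient's own reading, e.g. from a wearable sensor.
patient_bpm = 88.0

# Where does this patient sit relative to the (assumed normal) population?
z_score = (patient_bpm - population_mean_bpm) / population_std_bpm
percentile = NormalDist(mu=population_mean_bpm, sigma=population_std_bpm).cdf(patient_bpm)

print(f"z-score: {z_score:.2f}; roughly at the {percentile:.0%} point of the population")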

However, medical diagnosis is not solely a computational problem, and we can never replace the role of the clinician. Reading Gavin Francis's book made me realize this very strongly. There is an underlying strand of compassion and empathy that runs throughout the book, across the various medical cases he describes. This empathy can only be gained through dedicated clinical practice. It is the job of the clinician to explain the alternatives to the patient and convey the information in a manner the patient can understand. Very often the problems are not clear-cut and there is no easy solution. In this sense, the role of the clinician is not much different from that of a teacher. What we now have is a problem of educating the general public, and of making them understand their own bodies in a more profound manner.

Despite the great advances in technology, our societal understanding of the human body has not progressed much since the era of Leonardo da Vinci. We need a new model of anatomy that corresponds to the computational understanding of the human body, and a method of communicating this model between experts and the general public, similar to how printed books communicated the anatomical drawings of Vesalius. Without these models of communication, we will not be able to exploit the advances in medical imaging and computational medicine. In fact, the situation is far worse. These advances will be used by the nefarious powers in the market, who will exploit the gullible public much as livestock animals are exploited. The disaster in the food industry, which keeps churning out addictive sugar-loaded products, is a case in point. Another disaster is unfolding in the industry of medical supplements and pain medication, which makes its users addicted. Many people use legal drugs such as nicotine and alcohol for pain relief. But the exploitation is not limited to food or drugs. All social and cognitive activities will be analysed with respect to how they affect the human body, and they will be exploited accordingly. An important avenue of exploitation, based on advances in neuroanatomy and physiology, is our digital lives on the internet, which now hosts a significant chunk of the economy.

How can we have an open-source education in computational anatomy for the general public? This is not an easy question to answer. In fact, the question is deeply tied to how we communicate computational objects in general. Most people do not understand computers. Even trained computer programmers do not understand complex software. This is the case even for "open-source" software, where the source code is made public for anyone to see. For most people, it is illegible - a foreign, alien language. If we are not able to communicate relatively simple computational objects like web pages, how can we communicate complex biological models to the public?

We need to go back to the roots of anatomy and decipher how those artists sketched the human form and musculature. An anatomical drawing follows a complex aesthetic of visualization. When the body is dissected, the important nerves and muscles are drawn out and presented in a visually comprehensible manner. This is necessary even for medical students, and more so for patients. If we want to show the metabolism of the human body in motion, we need to develop a similar aesthetic language. Crucially, this language should be able to represent the computational aspects of the metabolism that are relevant for diagnosis. This might seem like a daunting task, but we have a great starting point: people are motivated to know about their own bodies, and they can start from static anatomical drawings aligned to their own medical images. However, we cannot avoid the job of educating the public. In this regard, we need the effort of artists like Leonardo da Vinci, who can bridge the elite world of scientists and the lay person.

We still have many cultural taboos about anatomical images. Most people associate skeletons and images of the body's interior with corpses. Very few see them as masterful creations of nature, depicting all the dynamics of life. In Germany, there is a traveling exhibition of plastinated human and animal anatomy, known as Körperwelten (Body Worlds), which keeps running afoul of religious groups. Part of the appeal of such anatomy exhibitions is their shock factor and morbid element. However, we need to grow out of this shock and understand our human bodies in their true dynamic form. We need artists who can bridge this gap.

Anatomy of open-source education: 

The media theorist Marshall McLuhan theorized in the 1960s that modern media are fundamentally reshaping human consciousness. He profoundly influenced many computer scientists who wanted to achieve a fundamental transformation of human society through computational thinking. In order to qualify as a medium for thinking, a computer should not be restricted to an elite set of programmers or engineers, but used by everybody. In other words, it should be as versatile as a book. This vision and dream was the driving force behind the pioneering work in personal computing by scientists such as Seymour Papert and Alan Kay. We now have the technological means to provide personal computers to all human beings. However, the way we are using computers today is a complete disaster, and a total sell-out of the original dream.

Most people use their computers (now hidden inside their mobile phones or home appliances) like livestock tethered to a pole. In other words, they are willing slaves to an overarching system of control. With repetitive use, they are conditioned to think and behave like captive animals. In this regard, computers and the internet are much worse than books, and closer to chemically addictive substances. A loose analogy to the situation today would be most people using books entirely to write confession material for the pastor of the church, and never for reading anything, not even the Bible. If books had been used like that, they would have produced a slave society in complete thrall to the powers in the dark. The invention of the printing press by Gutenberg would simply have exacerbated the situation. In reality, this did not happen. The fundamental reason is that the elite scientists found a way to communicate their knowledge to the general public. This started with the science of anatomy. Today, we face a similar task in the computational sciences. I believe we can draw similar inspiration from anatomy, building the first applications in the health and environmental sciences, where there exists a significant minority of passionate people eager to know more. These people can be trained to think in a computational manner through applications that visualize the hidden anatomy of human bodily processes or of environmental processes. Much as the underlying neural and cardiovascular structure describes the health of the human body, the threads of biodiversity and natural resources describe the health of the environment.

These applications need a way of communicating computational media that can be easily interpreted and visualized by the general public, and which, over the course of interaction, reveals its computational intricacies to the user. In other words, we need to decouple how the media is programmed from how it reacts to the user. But at the same time, we need a guarantee that the user interface offers the same level of computational control as a full-fledged programming language. The work of Alan Kay, the original inspiration behind the movie Tron, remains a hallmark in this regard. Snippets of this inspiring work are now available in the programming languages Smalltalk and Squeak. Another example is the work of Bill Atkinson at Apple, who developed HyperCard (eulogized here by Douglas Adams). Although limited in its computational expressiveness, this pioneering work was far ahead of the World Wide Web developed by Tim Berners-Lee in terms of user expressiveness. The web, in turn, is far more expressive than the disastrous world of apps we have today. In terms of the computational education of the general public, we have regressed tremendously.

In today's world (in 2016), my personal hope for an open data format for educating the general public to think in a computational manner lies with IPython notebooks. At present, these notebooks are used extensively by researchers in the computational sciences and machine learning to present scientific data sets. With a little bit of prodding, they might be developed into a way of communicating computational data to the general public. Taking another cue from clinical medicine, we can develop applications that help users understand their own bodies and lifestyles in an analytic manner. It is very easy to develop apps that make money by getting users addicted via some psychological weakness. It is much harder to develop apps that don't make as much money, but educate the public. Without a conscious effort from programmers, these will never happen.
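As a sketch of what such a notebook cell might look like - assuming pandas, with the file name and column names purely hypothetical - consider something like this:

# The kind of cell one might put in an IPython notebook so that a person can
# explore their own lifestyle data. The CSV file and its columns are hypothetical;
# the point is that the analysis is open, inspectable and editable by its user.
import pandas as pd

# A personal log exported from, say, a step counter or a diary app.
log = pd.read_csv("my_daily_log.csv", parse_dates=["date"])

# Weekly summary: are the low-activity weeks also the short-sleep weeks?
weekly = log.set_index("date").resample("W")[["steps", "hours_slept"]].mean()
print(weekly.tail())
print("Correlation between steps and sleep:",
      round(weekly["steps"].corr(weekly["hours_slept"]), 2))

Everything in such a cell is open to inspection and modification by the person it describes - exactly the property that the closed world of apps has given up.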

What is a paraview:

This blog post was inspired by my reading of Gavin Francis's book. However, it also draws from my own personal experiences and my perspective as a computer scientist. In this sense, it is something more than a review of the book. I express this by using the equivalent of anatomy in human language - etymology - to coin a new word.

In Greek, the prefix "para" refers to something "beyond" or "by the side of". Interestingly, this is one of the prefixes that applies equally to Greek and Sanskrit. Examples in English that use this prefix include paragraph, parallel, parapsychology, paranormal and parachute (first imagined by da Vinci). I coin the word "paraview" to refer to a "view beyond the subject". I think it is a very useful word for a detailed comment enriched with one's own experience or perspective. Ideally, our expressions of our digital lives on the internet should be connected to one another as paraviews, and not organized in a hierarchical list. This imposition of hierarchy on human thought by the World Wide Web is one of the complaints of Ted Nelson, the originator of the ideas of hypertext and hyperviews. At present, we do not yet have a hyperview browser to inspect the anatomy of online content, as envisioned by Ted Nelson. But at the least, we can try to express paraviews, instead of mere appendages in a hierarchy.