The Case for English

A confessedly well-intentioned article in a recent edition of the Hindustan Times by the present secretary of the Department of Biotechnology, Professor K. VijayRaghavan, advances the proposition that India as a nation will never attain the highest echelons of science and technology unless it matches its success in sloughing off political and economic colonialism by doing likewise with intellectual colonialism through a single measure – attenuating the centrality of English. Professor VijayRaghavan is far too intelligent a polemicist to be caught in the appealing trap of the either/or. He simply suggests that if the brightest minds cannot be allowed to put forth their best ideas in the linguistic medium in which they function optimally, then they will always be second-class citizens when deploying the language of another, one that has been foisted upon them – in this case, English. Instead, the author argues, why not allow for equal weightage of English and the student’s native language, such that there can be a ‘free-flowing mix’ between languages? Not for him the cop-out of the alleged monumental difficulty of the task – no, he proclaims, with focus and investment such a specious contention can be dismissed out of hand.

 

Is Professor VijayRaghavan right, however? Much as one’s impulse would be to agree with him and set about the task of making arcane knowledge available in every one of the 22 official languages of the country, the gargantuan nature of that undertaking lies precisely in the fact that there are 22 languages. These, of course, are in the main the result of a political decision taken over 60 years ago dividing states on the basis of dominant tongues, set to scripts with varying degrees of difference, which in turn trumped the variations accruing to over 700 dialects. That centralisation was attempted in the aftermath of India’s independence has been evinced time and time again by the cack-handed manner in which efforts have been made to impose Hindi on an often recalcitrant South (we’re all Madrasis anyway, aren’t we?). When Professor VijayRaghavan speaks of Germans thinking in their own language and yet speaking fluent English, such that they can work in that free-flowing manner of which he writes in terms both aspirational and wistful, the point to be made is that German dominates the German landscape in a way that Hindi does not India’s. The Germans do not have to think particularly about Pomeranian or Prussian variants. And when was the last time the French had to contend with explaining the intricacies of the Higgs boson in Provençal Occitan (note that the only Nobel given in that language – Frédéric Mistral’s, in 1904 – was for literature rather than for any scientific discovery), or for that matter the Spanish in Euskera for the benefit of the Basques (we shan’t even attend to the comparable matter in Barcelona, given the fissure lines making themselves increasingly manifest around Catalonia), or the Finns giving the time of day in scientific altar-speak to the Lapps?
To his credit, Professor VijayRaghavan speaks about the possibilities of the highest expression of scientific literacy in Rajasthan, Kerala and Orissa (a careful selection of the North and West in one move, and then the South and, separately, the East), and yet the question is still begged – who will bell the cat? Having studied French off and on for three decades, I am often staggered at how impoverished the language is when it comes to expressing scientific thought in contemporary times. This, despite the fact that German and French were the scientific lingua franca of the 19th century (a fact which rather undercuts Professor VijayRaghavan’s claim regarding Germany – after all, they did have a strong scientific culture in their language in the first place, with English displacing it only latterly), something that was underscored to me when I was compelled, as a requirement for my doctorate in the History of Science, to pass examinations in both. Professor VijayRaghavan, therefore, suffers from the crime of presentism – English dominates now, as German and French did two centuries ago, and before them, Latin. It is possible that English itself may be supplanted – perhaps Mandarin will take pride of place in time to come, again a language that has rather brutally extinguished the possibilities of other local contenders in a scientific sense through Han domination in China. If that happens, all the intellectual dazzle that emerges from a Telangana or a Meghalaya will still have to suffer the indignity of translation, this time into a language with which we, as a nation, do not even have facility, because we weren’t historically colonised by our north-eastern neighbour.

 

So are we left, willy-nilly, with English, then? I would hazard yes, for a reason that goes far beyond the global dominance of the language itself. It is rather to do with the cultural baggage that appears to attend the use of the vernacular, such as I have witnessed as a teacher in an ostensibly selective central governmental institute. Global examples relating to the dissemination of science have been met with student calls for a preponderance of Indian examples even when there are none to be had. If insularity regarding the nation state is already being countenanced in English, how much more is it likely to happen at the level of the native language? At a moment in time when the role of the mythic appears to hold sway over our collective consciousness – where, as in the motion picture ‘My Big Fat Greek Wedding’, in which the father proudly claims that every extant invention has emanated from Hellenistic sources, we have, in far more pernicious ways, trumpeted likewise across a swathe of arenas, aviation to surgery – while I would dearly wish to hasten to champion Professor VijayRaghavan’s position, ground realities must give me pause. It is only when the context arises for the development of a societal objectivity, one which can place scientific, medical and technological development across regions in a manner that evokes curiosity rather than cultural cannibalism, that Professor VijayRaghavan’s hope will stand a fighting chance. Even then, I suspect that the more esoteric terms in science will continue to be in English (if immediately inflected by its US hegemonic variant), used as such in other languages, rather than afforded neologisms in them. This is because English isn’t simply an imperial language; it has the concomitant advantage of being able to accommodate the possibilities of that imperium in a manner that none of our state languages can, because they do not enjoy that advantage.
They never did, except in some dim distant past, where vestiges of old Tamil can be found in, say, Bahasa Indonesia. However, I suspect that this is not the needle in the haystack that Professor VijayRaghavan seeks.

 

John Mathew is an Associate Professor of History of Science at Indian Institute of Science Education and Research (IISER) Pune.

An associated Confluence article on this topic can be found here.

H.G. Wells vs. George Orwell: Their debate over whether science is humanity’s best hope continues today

‘Man Combating Ignorance’ – what’s science’s role? Century of Progress Records, 1927-1952, University of Illinois at Chicago Library, CC BY-NC-ND

 

In the midst of contemporary science’s stunning discoveries and innovations – for example, 2017 alone brought the editing of a human embryo’s genes, the location of an eighth continent under the ocean and the ability to reuse a spacecraft’s rocket boosters – it’s easy to forget that there’s an ongoing debate over science’s capacity to save humankind. Seventy-five years ago, two of the best-known literary figures of the 20th century, H.G. Wells and George Orwell, carried on a lively exchange over this very issue.

Wells, one of the founders of science fiction, was a staunch believer in science’s potential. Orwell, on the other hand, cast a much more skeptical eye on science, pointing to its limitations as a guide to human affairs.

Though Wells and Orwell were debating in the era of Nazism, many of their arguments reverberate today in contemporary debates over science and policy. For example, in 2013, biologist Richard Dawkins justified confidence in science in these terms: “Science works. Planes fly. Cars drive. Computers compute. If you base medicine on science, you cure people. If you base the design of planes on science, they fly. It works….” On the other hand, Nobel laureate Peter Medawar famously argued that there are many important questions that science cannot answer, such as, “What is the purpose of life?” and “To what uses should scientific knowledge be put?”

Confronting challenges such as climate change and feeding the 2 billion people who lack a reliable source of food, it might be natural to regard science as humanity’s only hope. But expecting from science what it cannot deliver is just as hazardous as failing to acknowledge its great potential.

H.G. Wells’ fantastical fiction embodied scientific optimism. Frederick Hollyer

Wells: Full faith in science

Herbert George Wells was born in Kent, England, in 1866. After a childhood accident left him bedridden, he discovered a love of reading. He studied and taught science under biologist Thomas Huxley, eventually receiving a biology degree. To supplement his income, he worked as a freelance journalist, publishing his first book, “The Time Machine,” in 1895.

Today Wells, who died in 1946, is best known as a science fiction writer. Among his most prominent works are “The Island of Doctor Moreau,” “The Invisible Man” and “The War of the Worlds.” In his own day, however, Wells was better known as a public intellectual with progressive political views and high hopes for science.

Wells foresaw many of the landmarks of 20th-century scientific progress, including airplanes, space travel and the atomic bomb. In “The Discovery of the Future,” he lamented “the blinding power of the past upon our minds,” and argued that educators should replace the classics with science, producing leaders who could foretell history as they predict the phases of the moon.

Wells’ enthusiasm for science had political implications. Having contemplated in his novels the self-destruction of mankind, Wells believed that humanity’s best hope lay in the creation of a single world government overseen by scientists and engineers. Human beings, he argued, need to set aside religion and nationalism and put their faith in the power of scientifically trained, rational experts.

Orwell: Skeptical of the utopian impulse

Nearly four decades after Wells, George Orwell was born in 1903 to a British civil servant in India. He grew up in England a sickly child, but loved writing from an early age. Educated at Eton, he lacked the resources to continue his studies and became a policeman in Burma for five years.

After returning to England, he began a prolific career as a journalist. His writings explored such themes as the lives of the working poor and the dark side of colonialism, and he also produced fine literary criticism. It was near the end of his life that Orwell published the two works for which he is best known, “Animal Farm” and “Nineteen Eighty-Four.”

Today Orwell is widely regarded as one of the greatest writers of the 20th century. The word Orwellian has entered the language to describe totalitarian governments that use surveillance, misinformation and propaganda to manipulate popular understanding. Orwell also introduced such terms as doublethink, thought police and big brother.

Orwell operated with less lofty ambitions for mankind than did Wells. In reflecting on the utopian impulse, he wrote in “Why Socialists Don’t Believe in Fun” that creators of utopias resemble “the man who has a toothache, and therefore thinks that happiness consists in not having a toothache…. Whoever tries to imagine perfection simply reveals his own emptiness.”

Science isn’t enough

Orwell was not bashful about criticizing the scientific and political views of his friend Wells. In “What is Science?” he described Wells’ enthusiasm for scientific education as misplaced, in part because it rested on the assumption that the young should be taught more about radioactivity or the stars, rather than how to “think more exactly.”

Orwell also rejected Wells’ notion that scientific training rendered a person’s approach to all subjects more intelligent than someone who lacked it. Such widely held views, Orwell argued, led naturally to the assumption that the world would be a better place, if only “the scientists were in control of it,” a notion he roundly rejected.

Scientific expertise didn’t preclude some scientists from being swept up in Nazi fervor. German Federal Archive, CC BY-SA

Orwell pointed to the fact that the German scientific community had mounted very little resistance to Hitler and produced plenty of gifted men to research synthetic oil, rockets and the atomic bomb. “Without them,” wrote Orwell, “the German war machine could never have been built up.” Even more damning, he argued, many such scientists swallowed the “monstrosity of ‘racial science.’”

Orwell believed that scientific education should not focus on particular disciplines such as physics, chemistry, and biology – not, in other words, on facts. Instead it should focus on implanting “a rational, skeptical, and experimental habit of mind.” And instead of merely scientifically educating the masses, we should remember that “scientists themselves would benefit by a little education” in the areas of “history or literature or the arts.”

Orwell is even more critical of science’s role in politics. In “Wells, Hitler, and the World State,” Orwell treats calls for a single world government as hopelessly utopian, in large part because “not one of the five great military powers would think of submitting to such a thing.” Though sensible men have held such views for decades, they have “no power, and no disposition to sacrifice themselves.”

Far from damning nationalism, Orwell praises it to at least this extent: “What has kept England on its feet this past year” but the “atavistic emotion of patriotism, the ingrained feeling of the English-speaking peoples that they are superior to foreigners?” The energy that actually shapes the world, writes Orwell, springs from emotions that “intellectuals mechanically write off as anachronisms.”

 

Science’s promise and limitations: the debate continues

The contrast between these two towering figures of 20th-century literature should not be overdrawn. While championing science, Wells recognized that scientific progress could also lead to human misery. He foresaw the development of immense military destructive power in the atomic bomb, as well as the creation of technologies that would undermine privacy.

For his part, Orwell recognized that without scientific research and technological innovation, the British could not maintain parity with Germany’s rapidly developing military. He did not for a second think that his countrymen should revert to the use of shovels and pitchforks as weapons of war, and he called for adult males to own and know how to use a rifle.

Yet Wells’ and Orwell’s views on science’s potential did in the end contrast sharply. As Wells saw it, scientific habits of mind were precisely what was needed to rationalize the political order of the world. For Orwell, by contrast, purely scientific ways of thinking left human beings vulnerable to deception and manipulation, sowing seeds of totalitarianism. There is much to hope for from science, but a truly reasonable outlook places equal emphasis on science’s limitations.

 

Richard Gunderman is Chancellor’s Professor of Medicine, Liberal Arts, and Philanthropy, Indiana University.

This article was originally published on The Conversation. Read the original article.

The ethics of excellence: improving academic research

 

Many will agree that academic research in India needs to be internationally competitive and that our institutions should feature in rankings lists. Global research and competition are now increasingly diverse, and in this scenario India rightfully wants to be an important player. In pedagogy too, we face a situation of enhanced expectations. There has been a rapid expansion with the setting up of more Central and State universities, including more focussed institutions such as the Indian Institutes of Technology, Indian Institutes of Science Education and Research, Indian Institutes of Management and National Institutes of Technology, enhancing the opportunities for high-quality teaching. Despite the impressive job being done, there is considerable room for improvement.

Excellence as ethics

But what is still holding our nation back from achieving large-scale global academic excellence which is commensurate with our intellectual heritage and calibre? Beyond blaming the government and the bureaucracy, the usual suspects, it is important to look inward and ask whether our academics display an adequate ethical commitment to excellence.

It is rarely appreciated that excellence is an ethical issue. We think of it as something arising from people of calibre coupled with sufficient resources. But how do successful nations spot such people and resources and enable them to achieve their potential? The answer: there is a sincere and stated commitment to cultivating excellence as a goal. Contrasting this with the academic ethos in India raises uncomfortable questions.

Consider this advertisement put out by Stanford University recently: “We seek exceptional individuals who can develop a world-class program of research, and have a strong commitment to teaching at both the graduate and undergraduate levels.” In such institutions, once an excellent candidate is identified, the institution does everything to convince her/him to accept the offer. Loss of the candidate to a rival institution is considered a serious failure, as excellence is seen to be a precious commodity, with the heads of such institutions held accountable.

In India, in contrast, excellence is at best one of multiple criteria in faculty hiring. Though never openly stated, extraneous considerations abound. It is an open secret that these considerations define a large fraction of hiring across India, and often precede considerations of merit. In some places, excellence can actually go against the candidate.

The faults within

One might be tempted to blame failed institutions and departments solely on the calibre of their leadership and, ultimately, the government that appoints such leaders. But the problem persists even in those institutions led by respected academics, and the reasons need to be examined. While academics freely criticise personality cults in the political sphere, they are happy to cultivate cults of their own. A few individuals, possibly achievers in their younger days, grow into collectors of awards and fellowships and dominate organisations and committees. Factions grow around them. These people, administratively overburdened by their own choice, make serious judgments without adequate information. Conflict of interest is another, rarely highlighted, problem. For example, within an institution, the leader may provide partisan support for their own subject of expertise and restrain the progress of rivals.

The problem is not just confined to leaders. In many Indian institutions, there is increasing democratic participation of junior academics in hiring and promotions. One hopes that this would propel excellence to the top of the desirable attributes. Unfortunately even in this set-up, research areas that are of global importance are often, out of sheer ignorance, treated with disdain. This is a key point. In the ethics of excellence, ignorance cannot be an excuse. When making decisions affecting the future of one’s institution, it is an ethical imperative to educate oneself on all the relevant facts.

The atmosphere in which academics work has a profound impact on their achievements. Academic leaders need to offer support and mentorship but also impose a standard of excellence. Instead, too often, they veer to an extreme: either scattering resources indulgently or interfering in every minor matter. In the worst cases, they are vindictive towards those who show signs of exceptional achievement.

Study in contrast

Why do we in India accept extraneous considerations that militate against excellence? Of course our political culture is deeply implicated, which makes it ironic when our politicians ask why Indian scientists do not win Nobel prizes. But a part of the responsibility and the power to change lies within the academic community itself. The problem is our collective failure to articulate the goal of excellence and to exert firm pressure on anyone, however important, who blocks the path. The old tale that Indians instinctively behave like crabs, pulling others down, still has well-deserved traction in academia.

This is not to suggest that even developed countries are free of academic politics or these faults. Rather, there are correctives applied from two directions. One is the rank and file of academia which tends to be more professional than ours. Personality cults are met with a sharp push back and conflicts of interest are openly challenged. Even when disputes take place, excellence does not take a back seat. The other corrective comes from the top; institution leaders are evaluated by their funding and accreditation agencies, and made aware that their future leadership opportunities are diminished by every petty action and slipshod committee work. Ultimately, the system is accountable because it is committed to an ethical standard — the standard of excellence.

 

Sunil Mukhi is Chair of the Physics Programme at the Indian Institute of Science Education and Research, Pune, and Chair of the Panel on Scientific Values of the Indian Academy of Sciences.

This article originally appeared in The Hindu and has been re-posted here with permission of both the author and the paper.

Engaging with Confluence and Studying Vision Documents: A management approach

Editor’s note: Confluence claims to be “an online platform that aims to bring together all stakeholders of science in the society”. However, what does it mean for all stakeholders to come together? In this piece, a Confluence Reader shares his thoughts on some of the forms that this engagement can take, taking a recently published article as an example.

Source: https://goo.gl/AqHAci. License: CC0 Creative Commons.

 

How I see Confluence

I took ‘Confluence’ to mean a coming together of people with ‘CONcerns’ and those who have ‘inFLUENCE’! Confluence also brings to mind ‘conversation with fluency’, which seems to be becoming rare, but which social scholars insist should be practiced and promoted as an important facet of human endeavours (McNamee and Gergen, 1999), especially between people with divergent stands and viewpoints, with a view to converging for a constructive purpose.

It also brings to mind ‘considering all, do as you must’, a Vedic saying, with a bias towards action; and Gandhiji’s saying, “the end of a great life is not accumulation of wealth but contribution.”

So, concern, consideration, contribution, conversation and the like are the aspects that come to my mind while trying to participate in Confluence.

 

An example: my take on technology vision in India

This is in the context of a recent article in Current Science on vision documents in India, which was highlighted on this forum. I recall my corporate experience of vision building in a Public Sector Undertaking, and how a Harvard Business Review article (Collins and Porras, 1996) played the role of a starting point. This article is still a classic resource on the subject. Others may bring in their own experience and resources, and with these as a basis, the Technology Vision 2035 may be studied. Thus, with a technical briefing on the subject, some experience in it and a historical perspective, it appears possible to form an ‘approach’ for studying the vision.

There could be many approaches, certainly as many as the number of participants. For example, Gandhiji’s ‘Swarajya’, treated as the vision of the nation during the freedom struggle, can be considered a model. The progress made in Technology Assessment in other parts of the world can be another basis, as the authors of the Current Science article have pointed out.

However, in my view, it is also desirable to have a positive attitude towards those in ‘positions of responsibility’. Recalling the ‘Thirukkural’: “Entrust work to men only after testing them. But after they have been so appointed, accept their service without distrust. It is wrong to choose men without care and equally wrong to distrust men whom you have chosen.” This is because continuity is one of the requirements for creating a far-reaching vision. As Collins and Porras (1996) point out: “The rare ability to balance continuity and change – requiring a continuously practiced discipline – is closely linked to the ability to develop vision. Vision provides guidance about what to preserve and what to change.”

In this context, it may be remembered that vision building is a management topic. Here, what is important is progress, achievement, movement etc.; not a never-ending contemplation. Rather than watching things happen, it is more interesting to make things happen by starting from where one is and progressing and improving as one moves, embracing enablers and bracing the hurdles. With what we call the ‘process approach’ in management, it would be an interesting endeavor. But what is the process approach?

Like any big organization, the government and the bureaucracy are divided into departments. However, excellence at the level of individual departments does not necessarily lead to the best performance at the level of the whole organization. This is due to a lack of attention to the interconnections and interactions, which lie in no man’s land, especially over a period of time. The process approach recognizes the importance of these interconnections and interactions and helps to manage them systematically, howsoever spatially and temporally distant they may be. That is why a monitoring scheme, rooted in the tenets of a process approach, might be most useful for realizing our technology visions.

Especially with respect to technology, it may also be vital to know where the world is moving, and against that, assess our present state and visualize whether the vision would help us move forward along with others.

If people from each sector review the vision for their sector and come up with their feedback, collectively it would become a review of the vision as a whole, including the governance and management aspects that the authors have, agonizingly, focused upon. Confluence can certainly be a catalyst, however complex the issue is considered to be as of now. There is time, and time is on our side. Nothing would be insurmountable if the people whom we try to treat as ‘other’ are treated as ‘one of us’, such that there is a feeling that ‘we are in it together’.

 

This is how Confluence appears to me as a common man! In all, it appears to involve a good lot of reading, reflection and writing one’s thoughts down for sharing and collaborating in Confluence!

References

Collins, J. C., & Porras, J. I. (1996). Building your company’s vision. Harvard Business Review, 74(5), 65.

McNamee, S., & Gergen, K. J. (1999). Relational responsibility: Resources for sustainable dialogue. Sage.

 

Anbazhagan SV is a science graduate, now retired, with work experience in the areas of Industrial Engineering, O&M, HRD and ISO Standard-based management systems, in GKW and KIOCL.

Why does our society lack scientific temper and what can scientists do about it?

Source: Creative Commons CC0. Read this article in Hindi.

Scientific temper is the habit of coming to conclusions and making decisions based on evidence, reason and logic, and not having blind beliefs, being superstitious or believing in supernatural powers. Why is scientific temper lacking in our society, why are blind faith and superstition so prevalent, and what can we do about it? I would argue that both scientists and ‘non-scientists’ are to blame. I will soon argue that the classification of people into ‘scientists’ and ‘non-scientists’ is absurd and should be done away with, but while I am still using that classification, let me make one more point. As a practicing scientist, I would prefer to reflect on the fraction of the blame that lies with scientists, and on how scientists can help change the situation for the better, rather than lay blame on non-scientists. In this regard I wish to make three points.

 

1. My first point is that we unfortunately project science merely as a body of knowledge. Science is a body of knowledge, but in my opinion that is incidental. Science is primarily a set of methods, a tool-kit which we use to generate knowledge. In the method of science we make observations, perform experiments, and use evidence, logic and internal consistency to make decisions. More importantly, we are allowed to question and re-question everything – there is no final authority and no final answer. Science is thus always a work in progress; all answers are tentative and can be called into question at any time and by anybody. This is the method of science, but it is not what we are projecting as science. We do not teach the scientific method in our schools. Instead we burden our children with fact after fact, we burden their backs with bags full of books containing facts, but we do not tell them how we came to know all these facts, or indeed any fact. If you ask a high school student who has passed class 10, or indeed his or her teacher, I think they will be hard put to define exactly what the scientific method is. This is where the problem begins. And the problem continues even when scientists discuss among themselves; we are mostly busy describing the products of our research and do not sufficiently emphasize the methods by which we obtained those products. I would argue that the process of science is far more important than the product, because the product may be of interest only to a few specialists, but the process should be of interest to a much wider group of people. If you ask scientists about what other scientists have discovered they will tell you a great deal, but if you ask them how these discoveries were made, they will be able to tell you very little.
Taking my own field of research (the study of animal behaviour) as an example, I always have much to learn from the methods of knowledge production employed by geneticists, epidemiologists, psychologists, anthropologists, ethnographers, sociologists, economists, historians and even political scientists, even if I am uninterested in their actual findings.

Whether it is the claim that there is a 70% chance it will rain tomorrow, that there is water on Mars, that the earth is 4.5 billion years old, that we have discovered the Higgs boson, or that a little bit of red wine reduces the risk of heart disease, most of us take science’s predictions as articles of faith. Scientists say so, therefore it must be true – scientists insist, and we acquiesce, that we cannot understand the reasoning and logic behind these claims. This equates science with mythology and superstition. Those who take claims from mythology and superstition as articles of faith do not feel that they are any different from those who take the claims of scientists as articles of faith. There is a second, equally serious problem. Claims made by scientists, when taken on faith, do not come with error bars and estimates of risks of failure. We live in an imaginary binary world of truth and falsehood. When it rains as predicted, we praise scientists; when it rains despite predictions to the contrary, we become critical and suspicious of scientists. Some discussion of the scientific method, of the logic and reasoning behind the claims, however rudimentary, will help us appreciate the risks involved and the probabilistic nature of the claims. Indeed, I believe it will generate admiration for the scientist and the scientific method, even in the face of failure – consider the complexity and audacity of the task of landing a spacecraft on a moving target (Mars) 54.6 million km away, or the equally complex and audacious task of taking a billion numbers and subjecting them to a trillion operations to publish tomorrow’s weather prediction in the newspaper for all to see and pass judgement!

 

2. My second point, which I have already alluded to, is that we must do away with the distinction between scientists and non-scientists. I am a Professor at the Indian Institute of Science. I have a Ph.D. in science. I teach courses in science. I was the President of the Indian National Science Academy. So, by all accounts, I am a scientist. But is that always true? Am I a scientist 24/7? Do I use the scientific method for everything? The answer is a clear no. I do not use the scientific method when I decide what music I would like to listen to, which restaurant I should go to for dinner or what colour shirt I should wear. So-called scientists use the scientific method sometimes but not always. Similarly, so-called non-scientists should also use the scientific method sometimes, though not always. This raises the question of when we should use the scientific method and when we need not. If we want to know whether smoking cigarettes increases the risk of cancer, we must use the scientific method; if we want to decide whether a little bit of red wine actually reduces the risk of heart disease, we must use the scientific method; if we want to learn how to put a spacecraft on the moon, we need the scientific method; and if we have to decide whether or not Indians practised inter-planetary travel thousands of years ago, we must use the scientific method. It is perfectly all right for me to say I like Hindustani music more than Carnatic music without scientifically justifying it, but it is not all right for me to say that genetically modified (GM) crops are bad for us, or indeed that GM crops are good for us. In the latter case we need to use the scientific method. All of us should use the scientific method where it is needed, whether we are so-called scientists or so-called non-scientists – the distinction between the two is absurd. Moreover, this distinction creates an unnecessary and unhelpful hierarchy in society.
It is meaningless to say that scientists believe that GM crops are good and non-scientists believe that GM crops are bad. Our decision about whether GM crops are good or bad should depend on the evidence and not on belief. It will also help break down the presumed hierarchy between scientists and non-scientists if we boldly advertise the fact that even professional scientists do not use the scientific method 24/7.

 

3. All this leads me naturally to my third point, which is that we must create a situation where everybody can be a scientist when it is necessary to use the scientific method, and everybody can afford to be a non-scientist when it is not. If we want everyone to make evidence-based decisions, we must make it possible for everyone to have access to, and be able to understand, the evidence. And that is why we must teach science as a set of methods and not merely as a body of facts that scientists have discovered by some kind of magic and happen to believe in. Only then can we create scientific temper in society and remove blind faith and superstition. Science education should empower a school child who is told that an idol of Lord Ganesha has begun to drink milk to apply the scientific method – observation, experiment, logic, internal consistency and a questioning, disbelieving attitude – to decide whether what he or she has been told is plausible. Scientists can do a great deal indeed to foster scientific temper in society.

 

This article is based on a presentation made during the Panel Discussion on “Scientific Temper: A Prerequisite for Knowledge-based Society” organized by Rajya Sabha Television (RSTV), Council of Scientific and Industrial Research-National Institute of Science Communication and Information Resources (CSIR-NISCAIR) and Vigyan Prasar, on Sunday, 10th January 2016, in Vigyan Bhavan, New Delhi.

 

Raghavendra Gadagkar is a Professor of Evolutionary Biology and a former President of the Indian National Science Academy, New Delhi.

Biological mass extinction far more serious than previously thought?

– Does the tempest halt for the sake of your blindness?
From the poem Utpakhi (The Ostrich) by Sudhindranath Dutta

Source: Wikimedia Commons

A mass extinction episode is a global phenomenon during which more than seventy-five percent of Earth’s wildlife goes extinct. Over the last half a billion years, five such mass extinction episodes have decimated Earth’s wildlife, the most recent being the one that wiped out the dinosaurs forever. But how many of us are aware that several recent research studies1,2 have claimed, time and again, that Earth is in the throes of yet another mass extinction episode: the sixth mass extinction?

Indeed, over the last 100 years, 200 species of vertebrates have gone extinct3. This extinction rate, of two species a year, may elicit only a shrug of the shoulders and may not sound alarming: animals go extinct all the time, such is the way of nature. But at the rate at which animals usually went extinct over the last two million years, two hundred species would have taken not a hundred years but ten thousand years to disappear.

The extinction rate, in other words, has risen drastically – by almost 100 times – compared with the preceding eras, and in too short a window of time: the last one hundred years.
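The arithmetic behind this comparison can be laid out explicitly (a simple sketch using only the figures quoted above):

```python
# Figures quoted in the text (vertebrates only)
species_lost = 200       # vertebrate species gone extinct
years_observed = 100     # over the last century
years_expected = 10_000  # years the same 200 extinctions would take at the
                         # background rate of the last two million years

observed_rate = species_lost / years_observed    # species per year
background_rate = species_lost / years_expected  # species per year

print(observed_rate)                    # 2.0 species per year
print(observed_rate / background_rate)  # 100.0 — a roughly hundred-fold rise
```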

A research article3, published recently in the journal PNAS [Proceedings of the National Academy of Sciences], however, goes one step further with its claims regarding the sixth mass extinction.

This study reports that not only is the onset of the sixth mass extinction a contemporary phenomenon, but it is also a lot more severe in magnitude than previously thought. Furthermore, the study suggests that the human animal is solely responsible for this one. It is interesting to note that even the first mass extinction, which occurred long before the first human being came into existence, was caused not by a shower of meteors or by great volcanic eruptions but by animals8.

“In the last few decades, habitat loss, overexploitation, invasive organisms, pollution, toxification, and more recently climate disruption, as well as the interactions among these factors, have led to catastrophic declines in both the numbers and sizes of populations of both common and rare vertebrate species,” write the authors – Gerardo Ceballos, Paul R. Ehrlich and Rodolfo Dirzo – in their research article3. Furthermore, according to the study, in the last hundred years “50% of the number of vertebrate individuals that once shared Earth with us” have already been wiped out3, and such a catastrophic decline of animal populations indicates that the sixth mass extinction is already under way.

But why should we, as humans, care even if we lost more than 75% of our animals through mass extinction? Because, given how interdependent different animals are in the ecosystem web, the collapse would cascade like falling dominoes, and even humans would be adversely affected.

Take insects, for example. If tens of thousands of species of insects go extinct before the environment can adapt to their absence, then thousands of species of trees too would disappear, as many trees depend on insects for pollination. And if trees disappear, the death knell would be all but sounded for the human race.5 Our air would be filthy and toxic, and the ambient temperature of the Earth would rise by several degrees. The rate of soil degradation would rise, and so would the rate of soil erosion, resulting in the loss of arable land. Rainfall would be seriously affected, and in turn, so would the quality of our freshwater sources. The human animal, assuredly, would be negatively hit.

Therefore, given that this mass extinction could prove a credible threat to human survival, it is rather discomfiting to realize that we – the primary reason behind this extinction episode – appear to be blind to it. Two reasons explain why.

The first reason: The two species that disappear every year are ones that either are not alluring enough for us [not as alluring as the lion, for example] or live in isolated corners of the world, and, hence, we never really feel their loss.

For instance, do you know of the Catarina Pupfish? Or of the Christmas Island Pipistrelle? The Pyrenean Ibex? All three disappeared for good in the recent past, but how many of us are aware of this fact?

The second reason that contributes to our not being fully informed of the threat of mass extinction is that previous research studies have focused strongly only on the ‘end point’ – the complete extinction of animal species – to deduce the health of Earth’s wildlife. And this approach, the study in question highlights, is the fundamental reason why previous studies have underestimated the magnitude of the present mass extinction. The following paragraphs explain this in greater detail.

Consider, for the sake of argument, an animal species ‘X’. This animal X is spread across fifteen different countries in Asia. This animal’s ‘global population’, in other words, comprises ‘local populations’ spread across fifteen different countries. There would be one local population, of a certain number of individuals, of animal X in one country; another local population, of a certain number of individuals, in another country, and so on.

Suppose, after a year, we observe that out of the fifteen local populations, fourteen have been wiped out – local population extinctions – and only one last local population exists.

In such a scenario, if we were to concentrate only on the ‘end point’ – the complete extinction of this animal species – we would simply put a tick mark against animal X and report it as ‘not extinct’ since that one last local population still exists. And here lies the problem.

This strict binary of ‘extinct’ or ‘not extinct’ is simply too coarse a method to capture the true health of an animal species in the wild. In this case, for example, by simply labelling animal X as ‘not extinct’ we fail to capture the critical detail that animal X is faring miserably because its local populations have suffered a drastic decline: in this hypothetical scenario, from 15 to 1. Hence the underestimation of the magnitude of the mass extinction.
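The gap between the two bookkeeping methods can be made concrete with a toy calculation (hypothetical numbers, mirroring the species-X example above):

```python
# Species X: 15 local populations to begin with, 14 since wiped out.
populations_before = 15
populations_now = 1

# The binary 'end point' metric asks only: does any population survive?
globally_extinct = (populations_now == 0)
print(globally_extinct)  # False — X is ticked off as 'not extinct'

# A population-level metric captures what the binary label hides.
fraction_lost = 1 - populations_now / populations_before
print(f"{fraction_lost:.0%} of local populations lost")  # 93% of local populations lost
```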


Therefore, according to the study in question, to better gauge the magnitude of the present mass extinction event, it is imperative to focus on the extinctions of local populations, and not just on the global extinction of a species. Just because an animal lives does not mean it thrives.

All previous studies have followed the flawed approach described above: concentrating only on the ‘end point’. These studies thus labour under the misconception that the phase of biodiversity loss is only just beginning, and that humankind has the luxury of several decades to counter it. The rate of extinction, after all, is only two species per year. But the study in question asserts the contrary through rather aggressive rhetoric: “We emphasise that the sixth mass extinction is already here and the window for effective action is very short, probably two or three decades at most.” Probably even less.

Of course, this is not to say that humankind will go extinct within a few decades; but that if we do not begin to act right now, then this mass extinction will prove to be irreversible after two or three decades. Consequently, the extinction of the human race will, in all likelihood, closely follow.

To back such a strong claim, this study follows a novel approach. Unlike prior studies, it takes into account two important factors that directly precede the ‘end point’, i.e. the global extinction of an animal species: first, as delineated above, the decrease in the local populations of different animal species over the last hundred years; and second, the shrinkage of their geographical habitats over the same period. [Note: the study considered the local populations of 27,600 vertebrates, a small fraction of the total number of animal species in the world, which is about 8.7 million7.]

The study reports that over the last hundred years about one billion local populations of different animal species have gone extinct. One. Billion. To repeat, this does not mean that one billion animal species have gone globally extinct; rather, a total of one billion local populations of different species have been wiped out forever in certain regions of the world while probably still persisting in others. Hence, ‘local’ extinction.

Thousands of local populations of the Asiatic Lion, for instance, were once found all over India. But over time, nearly all of these local populations – along with the billion local populations of other animals – have gone extinct. Today, the last few local populations of the Asiatic Lion are found only in an isolated pocket, the Gir Forest, in the state of Gujarat. Once these last few local populations go extinct, the Asiatic Lion will be declared ‘globally extinct’, not to be found anywhere on the globe.

The study discovers another disturbing fact.

Over the last hundred years, 32% of Earth’s vertebrate species have decreased in population size and range. Of the 177 mammals analysed in the study, all have lost 30% or more of their geographic ranges, and the ranges of more than 40% of the species have decreased by more than 80%. Even animal species categorised as ‘Least Concern’ have been suffering similar woes and are tending inexorably towards the ‘Endangered’ category.

“Our data indicate that beyond species extinctions, Earth is experiencing a huge episode of [local] population declines and extirpations, which will have negative cascading consequences on ecosystem functioning and services vital to sustaining civilisation. Humanity will eventually pay a very high price for the decimation of the only assemblage of life that we know of in the universe,” the authors write3.

Therefore, even if only ‘two’ animal species are going globally extinct every year, this number – ‘two’ – belies the drastic and widespread extinction of hundreds of thousands of local populations of different animal species all around the world. Eventually, such local population and habitat losses will lead to the global extinctions of thousands of animal species over a short duration of time: mass extinction.

So, what do we do to counter this wave of local population extinctions, and thus in turn stall the sixth mass extinction?

To answer this question, one may talk of wildlife conservation programmes. Of ombudsman bodies. Of national policies. Of international partnerships. But the real answer to this question is one that we – and any high school student – have known too well and for far too long. An answer that has been repeated so many times that it seems to have lost its potency:

Reduce consumption3. Rein in population3.

Considering the unpalatable claims of the study, however, it should come as no surprise that certain scientists have taken issue with it and questioned its findings, and not without good reason.

These scientists find the study ‘alarmist’ and accuse it of crying wolf, because mass extinction events are far more severe in their extent. These sceptics4 believe that since mass extinction events unfold over hundreds of thousands of years, it is simply too soon to convince ourselves that we are knee-deep in another one. The study, after all, has considered the number of animal extinctions – and that too only of vertebrates – over only the last hundred years: a blink of an eye compared to the duration of previous mass extinctions. What of insects and other invertebrates? By focussing on only 27,600 vertebrates, a small subset of the total 8.7 million animal species, the study may be reporting a mass extinction event far more severe, even exaggerated, in magnitude than it actually is.

Furthermore, one may even argue that the human animal may be able to engineer solutions to alleviate the ill effects of mass extinction. As noted above, a biological mass extinction event is not a sudden guillotine that would end all humankind in one fell swoop. Yes, thousands of animal species do go extinct at a high rate and over a ‘short window’ of time, but this short window stretches over hundreds of thousands of years. And this window could be time enough for humans to develop robots to pollinate trees; to synthesise meat artificially, if the world’s fisheries were affected; even to create disruptive technologies that could mimic the role of otherwise extinct plants and trees and so ensure that atmospheric oxygen, along with other essential gases, is maintained at a certain vital level. By then, we may even have colonised other planets. So the threat of mass extinction may not be as bad as it sounds, at least not to the human.

Yes, this study does claim that if we do not act now – within two or three decades, at most – the mass extinction will become irreversible. Consequently, the rate of local population extinction could soon rise cancerously from what it is now and result in a precipitous drop in biodiversity, thus threatening human survival.

But will the drop in biodiversity be quick and significant enough to outmanoeuvre human technological growth? This is one question that needs to be considered before we sound the alarm.

Yet, dare I say, since we have already wiped out 50% of our vertebrate individuals over the last hundred years3, we could soon lose another 50% if something is not done. And, personally, given the momentum of extinction, I wonder if there will be time enough for the human animal to cope with it – through technology or otherwise. Furthermore, such a rate of biodiversity loss could prove far greater than even the most catastrophic of the previous mass extinction episodes:

“This makes it even more urgent. All previous mass extinction episodes spanned hundreds of thousands of years, but this mass extinction is happening now, and over only a few hundred years!” said Gerardo Ceballos, the lead author of the study, in an interview with me.

He continued: “And what of the sceptics you speak of? I know of only two scientists who say that there is no sixth mass extinction. If we do not act within the next two or three decades to reverse this mass extinction, our annihilation would be certain and complete. In fact, what is the point in waiting for the mass extinction to worsen in order to satisfy ourselves, beyond doubt, that it really, really is happening? Because then it would be too late for us to do anything. It is rather trivial to wait for the extinction to get over and say ‘Oh look it really did happen’, because by then we would have been long gone. We must act. Now.”

 

The tempest rages and gathers momentum, but the ostrich burrows her head deeper into her solace of ostensible peace. The ostrich also seems to forget that her wings cannot bear flight. Her limbs are fast, but only so fast.

 

Citations:

1Ceballos, Gerardo, Paul R. Ehrlich, Anthony D. Barnosky, Andrés García, Robert M. Pringle, and Todd M. Palmer. “Accelerated modern human–induced species losses: Entering the sixth mass extinction.” Science advances 1, no. 5 (2015): e1400253.

2Wake, David B., and Vance T. Vredenburg. “Are we in the midst of the sixth mass extinction? A view from the world of amphibians.” Proceedings of the National Academy of Sciences 105, no. Supplement 1 (2008): 11466-11473.

3Ceballos, Gerardo, Paul R. Ehrlich, and Rodolfo Dirzo. “Biological annihilation via the ongoing sixth mass extinction signaled by vertebrate population losses and declines.” Proceedings of the National Academy of Sciences 114, no. 30 (2017): E6089-E6096.

4https://www.theatlantic.com/science/archive/2017/07/maybe-were-at-the-start-of-a-sixth-mass-extinction-after-all/533124/

5https://www.nature.com/scitable/blog/our-science/no_trees_no_humans

6Kalinkat, Gregor, Sonja C. Jähnig, and Jonathan M. Jeschke. “Exceptional body size–extinction risk relations shed new light on the freshwater biodiversity crisis.” Proceedings of the National Academy of Sciences (2017): 201717087.

7Mora, Camilo, Derek P. Tittensor, Sina Adl, Alastair GB Simpson, and Boris Worm. “How many species are there on Earth and in the ocean?.” PLoS biology 9, no. 8 (2011): e1001127.

8Darroch, Simon AF, Erik A. Sperling, Thomas H. Boag, Rachel A. Racicot, Sara J. Mason, Alex S. Morgan, Sarah Tweedt et al. “Biotic replacement and mass extinction of the Ediacara biota.” In Proc. R. Soc. B, vol. 282, no. 1814, p. 20151003. The Royal Society, 2015.

 

Somendra Singh Kharola is a published poet and a freelance science writer based in Bengaluru. 

The unbiased reviewers – do they exist?

Can scientists really be holier-than-thou?

“It’s been four weeks since Sneha submitted her paper and she is in that nail-biting, insomnia-inducing, and anxiously-checking-your-mail-every-few-minutes phase. Only thing one can do at that point is hope the reviews will be impartial and not too harsh!”

All scientists know this gut-wrenching, helpless feeling: when your paper is out for review, you can do nothing but wait. Peer review is a bridge every researcher has to cross in order to communicate their findings and their validity to the scientific community. Although it is a benchmark of research quality, it is by no means a flawless process. It assumes that all scientists are altruistic, moral abiders of the scientific norms. Universalism, one of the four scientific norms, holds that scientific validity is independent of the sociopolitical status or personal attributes of the individual. But norms of all kinds are commonly violated, and scientific norms and scientists are no exception.

Single blind peer review is the conventional and most common form of peer review: the reviewers know who the authors are, but the authors are blind to the identity of the reviewers. The alternatives are open or double blind reviews. Although the open review process is more transparent, there has been debate over whether knowledge of author names and affiliations in both single blind and open reviews may lead to a more biased assessment of the scientific work.

 

The experiment

To test possible bias in single blind versus double blind peer review, Andrew Tomkins and colleagues conducted a controlled experiment on papers submitted to the 10th ACM International Conference on Web Search and Data Mining (WSDM 2017); the results were published in the November 2017 issue of PNAS [1]. They chose this model because computer science research is first sent to peer reviewed conferences rather than journals. An expert committee was set up, and four members reviewed each full-length submission: two of the reviewers had access to author information (single blind), while the other two had none (double blind). Reviewers considered each manuscript and entered a bid (yes, no, or maybe) on whether they were willing to review the paper. They then scored the manuscript on a scale from +6 (strongly recommend accept) to -6 (strongly recommend reject). The reviewers also provided a ‘rank’, ranging from 4 (top paper seen by the reviewer) to 1 (the bottom 50% of the manuscripts seen by the reviewer in their batch). Subsequently, the authors examined how single and double blind reviewers differed in bidding, reviewing, and scoring each paper.

 

The authors wanted to test three particular forms of bias: the Matilda effect, where papers with male first authors are given higher scientific credit; the Matthew effect, where famous authors get more recognition; and a third, where biases may emerge from the acclaim of the affiliated institutes. To test these, they selected the following covariates for the analysis: female first author, famous author, paper from a top university, paper from a top company. Tomkins et al. found that the odds were higher for a single blind reviewer to give a positive score to a paper if it was from a famous author (odds multiplier: 1.6), a top university (1.58), or a top company (2.1). They also found that single blind reviewers bid less frequently (22% less than double blind reviewers) and preferentially bid for papers from top universities and top companies. Although they did not find, in this experiment, that single or double blind reviewers treated papers with female first authors differently, a meta-analysis combining their study with other studies on the effect of gender on reviewing did yield significant results.
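To unpack what an odds multiplier of this kind means, here is a toy illustration of how an odds ratio is computed from a 2×2 table. The counts are made up for the example and are not data from the study:

```python
# Hypothetical tallies of positive vs negative recommendations by
# single blind reviewers, split by whether the paper's author is famous.
# (Illustrative counts only — not data from Tomkins et al.)
famous_positive, famous_negative = 80, 50
other_positive, other_negative = 60, 60

odds_famous = famous_positive / famous_negative  # 1.6
odds_other = other_positive / other_negative     # 1.0

# An odds ratio of 1.6 would mean famous authors' papers have 1.6 times
# the odds of receiving a positive recommendation.
print(odds_famous / odds_other)  # 1.6
```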

 

Thus, for the same paper, a reviewer who knows that it is from a reputed author or university has a higher likelihood of recommending it than a reviewer who does not have access to this information. Also, if a paper is from a reputed author, university, or company, more single blind reviewers bid to review it; it may therefore be assigned to more knowledgeable or field-appropriate reviewers, which may further alter its chances of acceptance or rejection.

 

If the single blind review process seems so murky, is double blind the way to go? If only it were so simple.

 

Wading the tricky waters of double blind reviews

In 2009, an international, cross-disciplinary survey of 4,000 researchers found that a majority (76%) considered double blind peer review “the most effective form of peer review”. But the double blind review process has its pitfalls. In many cases, the reviewer can guess the author affiliations if the paper is from a well-known group. Some journals recommend removing phrases such as ‘we previously showed’, but it is hard to stay anonymous in today’s globally connected world, as most research groups discuss published and unpublished work at conferences. Also, most papers in physics, mathematics, and computer science are pre-published in the e-print archive arXiv, which makes anonymity all but impossible.

Since June 2013, Nature Geoscience and Nature Climate Change have offered double blind review as an option, and the option was later expanded to all Nature journals. Recently, the Nature Publishing Group presented an analysis of double blind peer review, using data from 25 Nature-branded journals from March 2015 to February 2017, at the Eighth International Congress on Peer Review (London). Out of 106,373 submissions, only 14% of authors opted for double blind review in Nature, 12% in the sister Nature journals, and 9% in Nature Communications. Of those opting for double blind review, 32% were Indian authors, 22% Chinese, 8% French, and 7% American. This suggests that the double blind route is preferred by scientists who fear potential discrimination. Ironically, only 25% of papers submitted for double blind review were accepted, compared with 44% of those submitted for single blind review.

 

In search for a fair scientific community

Even though studies suggest biases in the single blind peer review process, double blind review is still regarded with skepticism among researchers, with only one in eight opting for it. Its ‘double blindedness’ also has a few kinks, given the existence of preprint archives and conferences.

 

So is there any point in double blind reviews?

 

Universalism states that scientific validity should be free from the sociopolitical status or personal attributes of the individual. But scientists, like all humans, harbor conscious and subconscious biases, and double blind peer review may be a small effort, if only in principle, to remove them.

 

References:

[1]      A. Tomkins, M. Zhang, and W. D. Heavlin, “Reviewer bias in single- versus double-blind peer review.,” Proc. Natl. Acad. Sci. U. S. A., vol. 114, no. 48, pp. 12708–12713, Nov. 2017.

 

P. Surat Saravanan has a PhD from the Tata Institute of Fundamental Research (TIFR), Mumbai and is currently a freelance science editor and writer.

Should we ban PET bottles?

Source. Wikimedia Commons.

A recent article in DNA reported that the Maharashtra government plans to introduce a ban on plastic bottles, starting with government offices and educational institutions in March 2018, and then eventually banning these everywhere else.

Most of the plastic bottles that we encounter are PET bottles, made from a plastic called polyethylene terephthalate.  India produced about 1.5 million tons of PET in 2015-16.  Almost all the PET produced goes into making the roughly 7 billion PET bottles that are consumed globally each year.  These bottles are designed as “use and throw” disposable objects.  This is clearly an unsustainable proposition, at odds with the paradigm of a circular economy where we minimize waste through increased reuse and recycling.

Therefore, is banning the use of PET bottles the way to go?  Unfortunately, this isn’t a particularly straightforward question to answer.  While PET bottles are mostly used only once (or, in our country, a few times), the bottles constitute a valuable component of the waste stream.  What this means is that there is sufficient economic value in used PET bottles for it to be worthwhile for rag-pickers to separate these out for resale to recyclers.  Therefore, unlike other packaging plastics such as thin polythene carry bags, a large fraction of used PET bottles are recycled and reincarnated as fibers for carpets, etc.  So, while we do use a large number of PET bottles, a reasonable fraction of these, at least in India, are collected and recycled – in the process supporting an industry that employs a large number of people.

Finally, is there a feasible alternative to PET bottles?  The DNA report quotes the Maharashtra Environment Minister, Mr. Ramdas Kadam, as saying that mineral water producers could switch from PET to glass.  Is the use of glass bottles more environmentally friendly than PET?  Glass is significantly heavier than PET, leading to increased vehicular emissions during transportation.  Glass is also significantly more brittle than PET, increasing the chances of breakage during transportation and use.  Therefore, the choice between PET and glass bottles for high-volume, mass-consumption applications is not an easy one and is worth debating.  Should we as a society move away from the increased use of PET bottles?  Is there such a thing as responsible use of PET bottles?  Do feasible alternatives to PET exist, or can such alternatives be created if we were to establish a time frame to phase out PET bottles?

How can a Medical Doctor be a Mythologist?

Source: Steve Rainwater. CC 2.0

I am often asked why I, as someone who studied medicine, would be interested in mythology. That is like wondering why children at school are taught literature and music, not just science and maths. Somewhere in the 19th century, we divided knowledge into scientific and non-scientific knowledge, and the former was placed on a higher pedestal, as ‘real’ knowledge. It is a rather silly idea that many people cling to even today.

 

To me personally, science is a process of thinking based on observation, measurement, experimentation and reproducibility. This works very well when dealing with material things; not so well when the mind comes into the picture. Uncertainty and complexity seem to increase as we move from the realm of physics and chemistry to that of biology, and on to that of psychology. Not that physics is an exact science: scientific knowledge is always provisional, based on the current state of measurement and the data available. New data can upset the current understanding of the world. So all knowledge remains uncertain, restricted by assumed standard conditions and contexts. Few people understand this provisional nature of science; many assume science to be absolute, objective, ontological – a replacement for religion. That is the root cause of the tension between scientific and non-scientific knowledge in society.

 

In the 19th century, as science and technology emanating from Europe gave Europeans a distinct advantage over the rest of the world, the Church felt extremely threatened. Suddenly there was no doubt that the world could not have been created in seven days, that the earth was neither flat nor the centre of the universe, and that the universe was not a fixed, locked unit like a clock; it was constantly expanding towards infinity, and would eventually start imploding onto itself. Suddenly the certainty of the Bible was in doubt. Christian missionaries had accompanied colonisers to justify colonisation on religious grounds: replace the false gods and heretic faiths with the one true God and faith of Christianity. This was the age when polytheism was seen as false, hence mythology, and monotheism was seen as true, hence religion. This artificial divide continues in the world today, with people equating Hinduism with mythology and Christianity with religion. That is why many Hindus get upset when I use the word mythologist to describe myself. They assume I am referring only to Hinduism, which I consider myth. This is a hangover of 19th-century anxiety.

 

In the 20th century, science and technology became such a major force that it was clear that monotheism was as much a myth as polytheism. God, one or many, could not be observed or measured; gods could not be ‘real’, and hence had to be ‘myth’. Faith in god was faith in myth. Religions were myth. But what about Buddhism or Jainism, which have no concept of God and so are essentially atheistic? Buddhism acknowledges rebirth in the Jataka tales, and Jainism speaks of the soul. Neither rebirth nor soul can be observed, measured, tested or reproduced. There is no scientific basis to them. But does that make them false knowledge?

 

False knowledge sounds suspiciously like false gods. The tone used by some rather shrill scientists or science fans, such as Richard Dawkins or Neil deGrasse Tyson, is suspiciously like that of Christian evangelists telling people to let go of false belief and embrace true belief. Structurally, they are treating science as a religion. They ask you to ‘believe’ in science, as missionaries ask you to ‘believe’ in God. They mock ‘false knowledge’ as missionaries once mocked ‘false gods’. They demonize, dehumanize and villainize everyone who does not agree with their mode of thinking. I feel this is the result of science being nurtured in Europe, where Christianity thrived. This combative and militant approach would not have been present had science thrived in India, where Jains spoke of anekanta-vada, the doctrine of multiplicity; Buddhists spoke of anicca, or impermanence, hence a comfort with uncertainty; and Hindus spoke of mithya, limited truth, as opposed to satya, limitless truth.

 

Un-scientific knowledge is not false knowledge. Every scientist knows that there is a range of ‘science’ across the various scientific disciplines. The hard sciences, such as physics, chemistry and biology, are far more based on measurement than the social sciences, which are highly interpretative. Theoretical physics sometimes borders on poetry. Not all aspects of psychology can be measured. History often relies on storytelling to join the dots between the various measurable data points; so does evolutionary biology. Measuring the mind remains elusive. Then there is the hard problem of consciousness: the measuring of inner experience, of subjectivity. And where do we locate ‘concepts’ such as infinity, which help us solve scientific problems but are essentially imaginary?

 

Just as infinity is not measurable but helps in measurement, ideas such as justice, God and equality are mythic, but they help shape society. In the 19th century, myth meant false knowledge. But in the 21st century, myth refers to believed knowledge: subjective truth, faith in an idea, an assumption that a community considers true enough to transmit over generations using stories, symbols and rituals. These myths are somebody’s truth, unlike fiction, which is nobody’s truth, and fact, which is everybody’s truth. I find this functional definition very useful and practical.

 

A scientist will consider homoeopathy to be unscientific. Even ayurveda’s scientific roots are challenged by many. Yet, for many people, homoeopathy and ayurveda solve real problems. I suffered from allergic rhinitis and acidity for years. As a medical doctor, I relied heavily on antihistamines and antacids, but I knew they were not permanent cures. Then I met a homoeopath and an ayurvedic doctor, and within a year I was cured of both conditions. The hard-nosed ‘evangelical’ scientist will call this the placebo effect. The fact is that I no longer suffer as I did. I know that this was not just psychology at work; I am still as stressed as I was then. But I cannot prove using measuring instruments how the process worked. Something worked. Where do we locate that experience? In truth, in falsehood, or somewhere in the realm of ‘we don’t know yet’? I think a good scientist will put it in the third basket, rather than being contemptuous and dismissive, as missionaries once were of what they called false gods.

 

Knowledge is power, and custodians of knowledge are powerful. By declaring all knowledge that cannot be measured or proven using scientific principles to be falsehood, scientists risk turning into power-brokers. It strips them of humility and curiosity. It makes them fools in the eyes of people who are smart enough to differentiate between evidence-based and faith-based knowledge. It makes them villains in the eyes of not-so-smart people whose self-worth and self-esteem depend on faith-based knowledge. These not-so-smart people yearn for a dignity that the scientist refuses to give, behaving almost like a witness-martyr of the Abrahamic mythologies who refuses to make room for false gods and is willing to die for his beliefs. In the resulting power struggle and violence, we find the most unscientific of papers intruding into Science Congresses by the strength of political power.

 

This has nothing to do with the pursuit of science. It has everything to do with the importance of power in society, and with the wisdom of knowing how science plays a key role in arbitrating and regulating that power through various economic and political channels. Insurance companies, for example, rely on scientific data for their actuarial processes. The Catholic Church uses its power to deny people access to scientifically proven treatments. Government recognition is denied to healing arts that are ancient, popular and traditional but fail to fit into the scientific paradigm. We cannot pretend that science exists in a vacuum. It is an ingredient of a society in which myth, hence faith, plays an important role.

 

Trained in medicine, I knew the power of science. I also knew the power of faith and suggestion. I knew how little control doctors actually have over the healing process, despite the best of medicines. I knew that crucial end-of-life decisions had so little to do with science, and so much to do with social norms and personal beliefs in gods and demons and the afterlife. I realise the place of both modern evidence-based medicine and mythology in people’s lives. I also realise the place of many other forms of healing that are based on faith, on non-measurable, even unscientific, principles, which work for some people but not for others. I realise that in a diverse world there will never be one God, one ideology, one ‘science’ or one myth.

 

The binary of true/false gods that comes to us from the Abrahamic mythologies, which thrived in Europe and now thrive in America, dominates the world of science. It turns scientists into keepers of a ‘true’ religion who must constantly fight the monsters of false ideas. This combative approach is wholly unsuitable for science; it strips science of wisdom. Instead, I feel we should encourage scientists to adopt the Indian mythic paradigm of limited knowledge (mithya) versus limitless knowledge (satya), which is far more scientific. It acknowledges that no one can know everything in a world that is dynamic and diverse. Our knowledge, whether based on the scientific method or on faith-based methods, will always be limited.

 

Devdutt Pattanaik is trained in medicine, and has authored 30 books and 600 columns on mythology and its relevance in modern times.

Rethinking the Social Contract of Science

The presentation looks at current and historical perspectives and institutional mechanisms regarding skills and related livelihoods in India. The author points to the rigid and hierarchical separation between skilling and education programmes and institutions, and argues that this reflects the caste-based “firewall” in Indian society demarcating education and intellectual work from skills and manual work. This firewall prevents the development of the capabilities necessary for workers in a modern, industrialised economy, and is thus a major factor holding India back from achieving its potential. The author argues for breaching it through an integrated education-cum-skilling system that develops knowledge along with skills and encourages lateral transitions, enabling capacity upgradation throughout one’s career. Inculcating a “technological temper” and promoting the dignity of labour are also required to break down culturally embedded prejudices.

 

D. Raghunandan is currently Director, Centre for Technology & Development. He is also with the Delhi Science Forum and has been President of the All India Peoples Science Network. For more than three decades he has been closely associated with social movements of science and technology.

This presentation was a part of an event organized at Jawaharlal Nehru University, New Delhi, on 27-Oct-2017.