There is a common theme in the popular view of religion(s) that they can be very different from each other: that religions founded in different eras and within different cultures make radically different demands on their adherents, shaping them to be well-nigh aliens to each other. This essentialist view of religion is not foreign to us brown folk, as similar arguments were posited in the run-up to the Partition of the subcontinent. However, this is not a peculiarly Indian trope either but quite widespread; indeed it remains a pet talking point of various modern movements, including the European and Hindu Right, Islamism, and anti-religious new-Atheism, among others. In all those cases I believe this essentialist difference between religions is quite overplayed and is often a function of what people would like to believe rather than what actually exists in practice.
In the disagreement over the relative strengths of deterministic and idiosyncratic factors in human observance of religion, I lean towards the position that human cognitive biases make different religions look and feel more similar than the straitjackets they were meant to be. Like all other organic ideas held in our brains, religions cannot be specially pleaded for in terms of their immunity from innately human cognitive biases. And a sufficient accumulation of cognitive biases drives religious practice into some longer-term steady state that is no longer representative of the initial design template of the founders or reformers. I think this non-essentialist view of religion is sensible, and it also explains why overtly non-religious (or anti-religious) movements suffer the same incremental and inexorable march to hotch-potch.
So the logical question is: what constitutes an epistemic standard of truth? In simple terms, why should humans bother with any philosophy (religious or otherwise) at all when they will all go to the same mush eventually? And more importantly, do scientific theories turn to mush too if scientists are left to their own devices for a sufficiently long period of time?
The answer, to my mind, lies in fallibilism, which is broadly the category under which the philosophy of science falls. Roughly defined, fallibilism is the principle that the admission (and measurement) of error is the only way to be objective. Karl Popper was the major 20th-century philosopher who formalised much of the thinking in this field, especially with regard to its application to the scientific method.
At this point allow me a little digression to clear up a common misconception. It is an ongoing misunderstanding among many people, especially ones trained in physics, that Popper was a falsificationist and that falsifiability is some epistemic gold standard of knowledge. This is false. Unfortunately many scientists take the argument of falsifiability to heart and can appear rather zealous about applying it (cf. Scientism): if a theory is not falsifiable it is immediately relegated to the dustbin, as opposed to using falsifiability as one conjectural thumb-rule amongst many – a helpful indicator, but not a necessary condition, of physically useful or true models. In fact, it is often unclear how one even falsifies a theory, because the experiment to do so may itself require the production of new knowledge that is not known to the reviewer a priori. Coming up with a useful experiment or gedanken-experiment to even allow falsifiability checks is a creative act in itself, and the prescriptivist reviewer may be too keen to outsource thinking to the thumb-rule rather than expend that creative effort. In short, (apparent) lack of falsifiability isn't the slam-dunk it is made out to be.
The institution of science is fallibilist. It recognises error, and not only recognises it but makes it the very fuel of scientific growth by incentivising all of the following:
- Error creation: coming up with new hypotheses, including "wrong-looking" ones; nothing is off the table at this stage.
- Error estimation: experimentation to find bounds on measurement error, and tolerances for what constitutes a "good" measurement of a physical phenomenon. No experimental paper gets through the review stage without error bounds on its measurements.
- Error correction: the institution of peer review/criticism to check causal explanations (here come thumb-rules like Occam's Razor, falsifiability etc.), logical consistency with assumptions, agreement of predictions with experiments within tolerance, and (very occasionally) the death of venerable old scientists.
The institution of science is therefore built around the centrality of error. That insight – that a lack of error is worse than error, and that all errors are mitigable through conjecture and criticism – is the foundational assumption which makes it fundamentally different from traditional religious systems. Claims of error-free people, sources of knowledge or inference (or institutions built around such claims) are by definition non-fallibilist and therefore non-scientific. No iterative error correction is possible within such claims, yet the people who uphold such beliefs and institutions will accumulate error up to a certain tolerance level. Beyond that there is revolution (often accompanied by violence), and the march to stasis starts afresh.
In this sense science is essentially a non-human thing to do. We are not naturally drawn to it, but have to train ourselves to become members of the tribe and to constantly second-guess ourselves – to behave in ways that, if applied to normal human relationships, would look positively neurotic or repetitive. This has often provided fodder for comedic portrayals of scientist-stereotypes, though I think that is a little unfair because most such people don't bring work home. Yet despite all that, scientists do occasionally take themselves (or their fame) too seriously, believe things because of their prior notions rather than testing afresh, and are prone to academic fashions, foibles and herd behaviour. That is why some scientific topics can, on rare occasions, progress only when venerable old scientists die, i.e. we have to resort to natural "violence". So the rules of the game are rigged in just the right manner to prevent long-term stasis, unlike religion.
Yet to my mind this is not the whole story. It is possible that while religious claims are indeed non-scientific, they can indirectly stimulate the error-creation step of the process. Religion's kernel is a product of human creativity and can act as fodder for thought – religious experiences inspiring bouts of creativity, religious or moral constraints forcing people to think deeply about a problem, or even the more common notion of wonder at the deep regularity of the universe and doing science to know the mind of "god", as it were. At the individual level, therefore, religiosity (like art, music etc.) can inspire creativity, while impeding error-correction at the institutional level through dogma, utopian thinking and the like. Hence a key social tension, especially in Western societies, has always been how to have just enough religion (or spirituality) to foster creativity without letting it grow too powerful institutionally.
In this regard the new-Atheist movement is counterproductive to scientific growth, because the key goal of that enterprise is to summarily extinguish a major cultural feature of humanity that has fostered a lot of creativity (when kept in control). It is a bit like preaching against jumping off cliffs, an activity that can and has led to deaths but can be made very safe – indeed enjoyable and exhilarating – by using parachutes and other means of controlling descent. A context-free aversion to religion, just like a context-free aversion to jumping off cliffs, limits human creativity and is, ironically, against the core philosophy of science.