Belief, from core beliefs to mundane ones, is the centre of human psychology. That is, each other major aspect of human psychology – perception, reasoning, imagination, emotions, action, language and memory – revolves around belief. However, the current understanding of how beliefs form, and the nature of belief, is wrong – and not just a bit wrong, but the-Sun-goes-around-Earth wrong.
And just as the truth about the Sun and Earth’s relationship revolutionised cosmology, the truth about the psychology of belief will revolutionise psychology, and have equally profound implications for every aspect of our lives. However, this truth is so counterintuitive that it can initially seem unbelievable, as first did the idea that Earth isn’t the stationary centre of the universe but sweeps around the Sun while spinning about an axis.
How beliefs form
A belief, however fundamental or mundane, involves a claim. For example, we may believe that killing is wrong or that there's no milk in the fridge. And a claim enters our mind via one of four mental processes:
comprehension – when the claim is communicated to us via some medium that we perceive, which is normally voice or text
reasoning – when we generate the claim using logic, whether the logic is good or bad
imagination – when we generate the claim without using logic
recollection – when we retrieve the claim from memory

And the counterintuitive truth about how beliefs form is that the mere entrance of a claim into our mind, via any of these four processes, causes us to believe it. That is, we're extremely credulous.
However, the idea that Earth sweeps around the Sun while also spinning about an axis – heliocentrism – initially seemed implausible mainly because it seemed obviously contrary to experience. That is, we can't feel such movement, and it was wrongly assumed that we should be able to. Consequently, the idea that Earth is the stationary centre of the universe with the rest of the universe revolving around it – geocentrism – remained the consensus for around twenty centuries after the earliest known heliocentric theory was proposed, by the ancient Greek astronomer Aristarchus of Samos.
And just as it was shown that, upon closer analysis, heliocentrism actually isn't contrary to experience, so it can be shown that, upon closer analysis, this counterintuitive theory of belief formation actually isn't contrary to experience. It can also be shown that, upon closer analysis, the theory actually isn't contrary to the concept of comprehending, imagining up or recalling a claim, and that the current conception of reasoning is actually wrong. Also as with heliocentrism, the theoretical and practical implications of this theory are profound.
And not only is this how beliefs form; it's this way not because of the nature of the human brain, but simply by logical necessity – a second counterintuitive aspect of this theory. The central tenet of the field of psychology today is that how the human mind works is solely due to the wiring and chemistry of the human brain, which is in turn determined by a person’s genes and experiences. Therefore all mainstream theories about any aspect of the functioning of the human mind involve that aspect being due, in some way, to the nature of the human brain. However, although our capacity for belief is obviously due to the nature of the human brain – rocks can't hold beliefs – there isn’t anything about the human brain which causes our extreme credulity. Instead, that extreme credulity is simply a logically inherent, and therefore inevitable, feature of a claim entering a mind, for any form of intelligence. Also, there’s a surprisingly basic logical flaw in the idea that we believe something because we’ve assessed that it’s true, even though this idea can seem true by definition and therefore unimpeachable.
The idea that the mere entrance of a claim into our mind causes us to believe it isn’t new. The prominent 17th-century philosopher Baruch Spinoza proposed it in his 1677 book Ethics. However, to support his theory he used an obscure philosophical argument, and several modern philosophers have independently concluded that it doesn’t support its conclusion. This partly explains why belief in Spinoza's counterintuitive theory has, ironically, been negligible since it was published. Even awareness of the theory is low today, even among researchers in psychology and philosophy, and interest in it is lower still.
Spinoza and the title page of his book Ethics
The most prominent modern academic writing on Spinoza’s theory is a 1991 paper called ‘How Mental Systems Believe’, which catalogues observational evidence, including experimental research, which seems to support the theory. It was written by the Harvard psychology professor Daniel Gilbert, one of the rare academics who subscribe to the theory.
The paper also presents a speculative alternative theoretical argument for the theory, but the argument involves belief formation working this way for an evolutionary reason, rather than logical necessity, and has several other flaws.
The nature of belief
The current understanding of the psychology of belief is badly wrong in a second way. It can also seem obvious from experience that we have different degrees, or strengths, of belief. For example, it can seem obvious that our belief that we’re eating an apple is stronger than our belief that it’ll rain tomorrow. However, the counterintuitive truth about the nature of belief is that we don’t have different degrees, or strengths, of belief, because all belief is certainty. As with Spinoza’s theory of belief formation, closer analysis reveals that this theory actually isn’t contrary to experience. And also as with Spinoza’s theory, the theoretical and practical implications of this theory are profound.
Also as with Spinoza’s theory, the certainty of belief isn't due to the nature of the human brain, but simply to logical necessity – a second counterintuitive aspect of this theory. Although our capacity for belief is obviously due to the nature of the human brain, there isn’t anything about the human brain which prevents different degrees of belief. Instead, the certainty of belief is simply a logically inherent, and therefore inevitable, feature of belief, for any form of intelligence. That is, different degrees of belief are actually logically impossible.
And also as with Spinoza’s theory, the idea that there aren't different degrees of belief isn’t new. However, also as with Spinoza’s theory, previous theoretical arguments for it have issues. And also as with Spinoza’s theory, this partly explains why awareness of this counterintuitive theory is low even among researchers in psychology and philosophy, and interest in it is even lower, and support for it is lower still. Regarding interest, there doesn’t seem to have been any experimental research into the theory, unlike with Spinoza’s theory.
The centre of human psychology
The truth about the psychology of belief isn't just counterintuitive, but is revolutionary, because belief is the centre of human psychology. That is, each other major aspect of human psychology revolves around belief:
The function of perception in human psychology is to enable us to form beliefs about the nature of, and events in, our surroundings via the information collected by our sense organs. Also, our current beliefs influence the processing of that information.
The only exception is the perceptual process which leads to a reflex action, such as when we touch something hot and our hand involuntarily jerks away from it. In such cases the signal from a sense organ directly triggers muscle activity before it has reached our cognitive processes, and therefore before any beliefs have formed via that sensory information. The function of such perceptual processes therefore isn't to enable belief formation, and is instead to cause the body to respond automatically and rapidly to a potential source of harm. However, even in such cases the sensory information also leads to belief formation when it reaches our cognitive processes, after the reflex action has begun.
Reasoning is the formation of beliefs using logic, whether the logic is good or bad. Also, our reasoning is based on our current beliefs.
The only exception is hypothetical, 'What if...', reasoning, which involves premises that we either disbelieve or are uncertain about. However, even such reasoning also involves premises that we do believe. And the conclusion of such reasoning is a belief – the belief that claim X is a logical implication of the premises.
Judgements and decisions, which are the product of reasoning, are beliefs. The product of any kind of reasoning is by definition a conclusion, and a conclusion is by definition a belief. That is, to conclude X is to believe X. A decision can be to do or not do something, which might not seem like a belief. However, such a decision is the conclusion, and therefore belief, that we'll do or not do that thing.
The function of imagination in human psychology is to aid our reasoning, and thus belief formation, by enabling us to generate possibilities in our mind for analysis. Also, our current beliefs influence the creation of those possibilities.
A deliberate, as opposed to reflex, action is by definition the product of a decision about how to act. And, again, a decision is a belief formed using logic based on our current beliefs. Decisions about how to act are based on our short-term and long-term goals – which are themselves decisions and therefore indeed beliefs – and on our beliefs about the world – including our own bodies – and about what effects we should try to have on the world through action in order to achieve our goals, and what particular actions are likely to have those effects. Our goals can range from opening a door to the fulfilment of life ambitions, and can involve things occurring from several seconds to several years in the future, and even after our death.
The function of the emotions is to ultimately cause us to act in particular ways. Each emotion evolved because it tended, in our evolutionary past, to ultimately cause us to act in a way that was beneficial, however indirectly, to reproductive chances. But, again, action revolves around belief. Therefore, in human psychology, the fulfilment of an emotion's function revolves around belief. However, given the speediness of the brain, which works on the timescale of milliseconds, our experience of an emotion can so quickly lead to a particular action that the action can seem to be a direct product of the emotion.
Also, whereas some emotions are automatically triggered by our bodily states – such as the emotions of hunger, physical pain, sexual pleasure and the pleasure of eating – most emotions are aroused by our beliefs. Specifically, our beliefs about ourselves and about the world, including other people and the content of their minds, and the world of the past and the possible future. Even emotions that are aroused via our perceptions are aroused by our beliefs about the world that we form on the basis of those perceptions – although, the speediness of the brain means that such emotions can follow our perceptions so quickly that they can seem to be directly aroused by those perceptions. And even emotions that are automatically triggered by our bodily states are often accompanied by emotions aroused by our beliefs about such states. For example, the pleasure inherent in eating tasty food can be accompanied by the emotion of relief that's aroused by our belief that we're satisfying our hunger.
The psychology of language can be divided into language use – which normally involves speaking or writing – and language comprehension – which normally involves listening to speech or reading.
Speaking and writing are acts, and, again, action revolves around belief. Also, the immediate aim of language use is to cause our audience to form beliefs based on the content of our speech or writing, whether or not we hold such beliefs ourselves. And such beliefs include beliefs about what our own beliefs are.
The aim of language comprehension is to form beliefs about the content of some speech or writing. And such beliefs can then be used in the formation of further beliefs, including beliefs about what the speaker’s or writer's own beliefs are.
Strangely, there doesn’t seem to be an awareness within academia that belief is the centre of human psychology. No psychology book or paper that I’ve read refers to this fact, including even those specifically on belief. For example, the psychology professor James Alcock, who’s an expert on belief, doesn’t refer to it anywhere in his 638-page 2018 book on belief, Belief. And the cognitive scientists Nicolas Porot and Eric Mandelbaum don't refer to it in their 2020 paper 'The science of belief: A progress report'. They do refer to 'the centrality of belief to cognitive science'. Cognitive science is the interdisciplinary study of the mind, which combines theories in psychology, philosophy, artificial intelligence, neuroscience, linguistics, anthropology, and other fields. However, again, the claim that belief is central to the workings of the human mind isn't as strong as the claim that it's the centre of those workings. Indeed, Porot and Mandelbaum never claim that belief is uniquely important, but write: 'As belief arises in so many areas of cognitive science, it deserves pride of place alongside such venerable stalwart concepts as memory, attention, perception, and mental representation'. Also, the claim that belief is central to this study of the workings of the mind actually isn't strictly even the claim that belief is central to those workings themselves, given that the former claim is merely about this field of study rather than about the subject of the field. The only reference that I've found to belief being the centre of human psychology is in a 1921 lecture by the philosopher Bertrand Russell, who said: ‘Belief … is the central problem in the analysis of mind. … Psychology, theory of knowledge and metaphysics revolve about belief …’.
The book and paper on belief that I mentioned in the previous paragraph show that there's some recognition within academia that belief is a distinct topic of study within the field of psychology. And yet, in addition to the unawareness that belief is the centre of human psychology, the study of belief currently isn’t actually even an established subfield of academic psychology, unlike the study of perception, reasoning, emotion, language, memory, and many other areas, including more niche areas such as music psychology. For each established subfield there are multiple dedicated textbooks available, but there’s no dedicated textbook on the psychology of belief. Even the general psychology textbooks that I checked don't have dedicated chapters, or even dedicated chapter sections, on belief, and the term belief often doesn't even appear in the index.
Given that belief is the centre of human psychology, a fundamental change in our understanding of the psychology of belief will inevitably have profound implications across the whole field of psychology. That is, it will constitute a paradigm shift that will revolutionise the field. And this will in turn have equally profound implications for every aspect of our lives. And the acceptance that the mere entrance of a claim into our mind, whether via comprehension, reasoning, imagination or recollection, causes us to believe it, and that there aren't different degrees of belief, because all belief is certainty, would clearly constitute two fundamental changes in our understanding of the psychology of belief. Therefore these two theories are radical not just in the sense of their content, but also in the sense of their potential to revolutionise a whole field of science, and thereby have an equally dramatic impact on every aspect of our lives.
Academia’s open-mindedness deficiency
As I mentioned, previous theoretical arguments for these two counterintuitive theories about the psychology of belief have issues. And this past absence of good theoretical arguments partly explains why even awareness of these theories is low today, even among researchers in psychology and philosophy, and interest in them is even lower, and support for them is lower still. However, the past absence of good theoretical arguments is ultimately much more an effect than a cause of the poor status of these theories within academia. That is, the more fundamental cause of their poor status is the combination of their counterintuitive nature and insufficient open-mindedness within academia. This has led most academics who’ve encountered them to not take them seriously, which has in turn resulted in academia not giving them sufficient attention to develop good arguments for them.
It’s of course an academic’s job, as they seek to understand reality, to be open-minded, given that it’s always possible that reality differs from our current beliefs. Indeed, the history of science is full of examples of initially counterintuitive ideas eventually being vindicated, although often after being rejected by most scientists in the relevant field, and often with scorn. As someone once said, 'That which seems the height of absurdity in one generation often becomes the height of wisdom in another'.
The tendency to immediately reject counterintuitive ideas or data is called the Semmelweis reflex, in honour of the doctor and researcher Ignaz Semmelweis. In 1847 Semmelweis showed that the mortality rate for women in maternity wards was dramatically reduced by doctors washing their hands with disinfectant before performing each delivery. But his finding pre-dated the germ theory of disease, and the medical community rejected it for many years – at the cost of many more lives – and ridiculed Semmelweis, partly because it seemed implausible that the hands of a gentleman doctor could transmit disease.
Perhaps the most dramatic example of reality differing from our beliefs is our past understanding of the cosmos. We once believed that Earth is flat, and then discovered that it’s actually round. Then we discovered that Earth isn’t, as thought, the stationary centre of the universe, with the rest of the universe revolving around it, but orbits the Sun while rotating about an axis. Then we discovered that even the Sun isn’t, as thought, the centre of the universe, but is just an ordinary star that's one of the myriad which make up the rotating whirlpool-shaped swarm of stars that we call the Milky Way galaxy. Then we discovered that the Milky Way isn’t even, as thought, the whole universe, because there are other galaxies. Today we know that the Milky Way consists of over 100,000,000,000 stars, and is part of a group of at least 54 galaxies, which is part of a supercluster of around 100,000 galaxies, and that there are at least 10,000,000 galactic superclusters in the universe.
This photograph can look like an image of a starfield, but it isn’t. This is the Hubble Ultra-Deep Field image, and almost every object in it – even the tiny points of light – is actually an entire galaxy, each of which contains between a few hundred million and a few trillion stars.
Of course, academics shouldn’t automatically consider any theory credible. As is said, ‘Keep an open mind, but not so open that your brain falls out’. But academics equally shouldn’t, as often happens, immediately reject a counterintuitive theory simply because it’s counterintuitive. Most academics would probably protest that they wouldn’t do that themselves because they’re well aware of the importance of open-mindedness in their work, and of past cases of counterintuitive theories that were initially widely rejected but then vindicated. However, an awareness of the importance of open-mindedness isn’t itself open-mindedness, contrary to what can be assumed. That is, even with such an awareness we may still, upon encountering a specific counterintuitive theory, be too quick to think ‘I’m open-minded, but that’s so obviously false that it’s not worth serious consideration’.
Ironically, believing in the importance of open-mindedness can actually reduce our open-mindedness, because it can enable us to assume that our rejection of a counterintuitive claim can’t be due to a lack of open-mindedness. One version of this is that someone who regards themselves as being committed to reason and science, and therefore to open-mindedly following logic and evidence wherever it leads, can be too quick to assume that a counterintuitive scientific theory is the product of an insufficient commitment to reason and science. Similarly, our exercise of a small degree of open-mindedness towards a counterintuitive claim can, ironically, prevent greater open-mindedness, because it can enable us to assume that we’ve been sufficiently open-minded. Also, our memory of past instances of being open-minded can, even if accurate, lead us to wrongly assume that we're open-minded by nature, and therefore always open-minded.
Also, one of the many universal biases in human cognition is illusory superiority, which is our cognitive bias towards thinking that we're above average in some quality or ability. We're therefore biased towards thinking that we're more open-minded towards counterintuitive claims than the average person. And even if we’re aware of this bias we’re then biased towards thinking that we're more able to overcome it than the average person.
Also, we may assume that our rejection of a counterintuitive claim is due not to it being counterintuitive, and therefore to a lack of open-mindedness, but to us having a specific reason to reject it. However, whenever anyone considers a claim to be counterintuitive they do so for at least one specific reason. Therefore having a specific reason to reject a counterintuitive claim doesn't rule out insufficient open-mindedness, because we could be insufficiently open-minded about the possibility that our reason is invalid.
Also, another universal bias in human cognition is confirmation bias, which is our cognitive bias towards interpreting, seeking and recalling information in a way that confirms, or at least supports, what we currently believe. Confirmation bias therefore reduces our open-mindedness towards counterintuitive claims, including making us biased towards interpreting such claims as false.
Ironically, confirmation bias not only contributes to our closed-mindedness towards a counterintuitive claim, but creates a bias towards concluding that someone else's belief of the claim must involve closed-mindedness, given their apparent inability to see the obvious falseness of their belief. And the more counterintuitive the claim is, and therefore more distant from the truth it seems, the more closed-minded the other person can seem. And this judgement further reduces the probability that we'll notice our own closed-mindedness. Also, even if we consider the fact that we're just as confident in an alternative claim as the other person is in the counterintuitive claim, we'll be biased towards interpreting our confidence as being justified rather than the product of closed-mindedness. Also, if we accuse the person of being closed-minded, and they dispute this, we'll be biased towards interpreting their response as being due to their closed-mindedness preventing them from seeing their closed-mindedness.
Also, another universal bias in human cognition is availability bias, which is our cognitive bias towards making a judgement on the basis of whichever relevant information happens to be most readily available to our mind, instead of suspending judgement until we’ve checked for other relevant information. In short, it's our bias towards jumping to conclusions. Given that what we currently believe regarding the subject of a counterintuitive claim is obviously readily available to our mind, availability bias means that we’re biased towards jumping to the conclusion that the counterintuitive claim is false.
Also, an obvious implication of a counterintuitive claim is that our current belief regarding the subject of the claim is wrong – and the more counterintuitive the claim the more dramatically wrong we are. Conversely, an obvious implication of our current belief is that those who believe the counterintuitive claim are wrong, and that we’ve succeeded in recognising the falseness of the claim – and the more counterintuitive the claim the more dramatically wrong the believers of it are. Therefore a counterintuitive claim is simultaneously a threat and potential boost to our self-image and reputation – and the more counterintuitive the claim the greater this threat and potential boost. And this can result in a bias, out of self-interest, towards dismissing and even attacking the claim – and the more counterintuitive the claim the stronger this bias. And this is especially so for an academic if the claim concerns their area of expertise – especially given that academics publicly express their beliefs relating to their research area to considerable numbers of people, in lectures, conference presentations, and writings.
Also, a counterintuitive claim can threaten to invalidate an academic’s line of research, or at least a significant part of it – and the more counterintuitive the claim the greater the potential degree of invalidation, and so the greater this threat. Therefore such a claim can both be a threat to their funding and imply that they’ve wasted time and effort on their line of research, which they may have been following for years and even decades. And both of these considerations lead to a bias, out of self-interest, towards dismissing and even attacking the claim – and the more counterintuitive the claim the stronger this bias.
Also, most fields of research affect, directly or indirectly, people's lives. Therefore a counterintuitive claim can imply that a current consensus in a particular field of research is in some way bad for at least some people – and the more counterintuitive the claim the worse for people the consensus could be. Therefore a counterintuitive claim can be a threat to an academic’s self-image and reputation in this sense too, which leads to a bias, out of self-interest, towards dismissing and even attacking the claim – and the more counterintuitive the claim the stronger this bias.
Also, accepting an initially counterintuitive claim creates a need to work through its implications for all of our related beliefs – and the more initially counterintuitive the claim the greater this need is, and the more of our related beliefs will likely be affected. And, for researchers, accepting an initially counterintuitive theory can require learning new concepts and a new way of thinking – and the more counterintuitive the theory the more likely this is. And a desire to avoid these forms of mental effort leads to a bias, out of self-interest, towards dismissing and even attacking the claim – and the more counterintuitive the claim the stronger this bias.
Given all of the above points, academia inevitably suffers to some degree from an open-mindedness deficiency. Of course, individual academics vary greatly in their open-mindedness. And the inevitability of this deficiency doesn't mean it can't be reduced. For example, simply increasing awareness of the above points would probably at least somewhat increase the open-mindedness of at least some academics.
As the physicist Max Planck observed:
A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.
This point has been shortened to ‘Science advances one funeral at a time’. One study found that the vitality of a scientific subfield actually increases upon the death of an eminent scientist who was still active in that subfield. Another study found that ‘… highly novel papers … deliver high gains to science: they are more likely to be a top 1% highly cited paper in the long run, to inspire follow-on highly cited research, and to be cited in a broader set of disciplines’. But the study also found ‘… strong evidence of delayed recognition of novel papers and that novel papers are less likely to be top cited when using a short time window’ and that ‘… novel papers typically are published in journals with a lower than expected impact factor [a measure of how often a journal’s articles are cited]’. The paper points out that being little-cited, and published in journals with low impact factors, can also have a negative effect on a researcher’s future funding prospects.
This frustration of scientific progress by scientists themselves, through insufficient open-mindedness, has frustrated humanity’s progress. A tragic example of this is our failure to find a cure or even a disease-slowing treatment for Alzheimer’s, despite decades of research. As the science journalist Sharon Begley reported:
The brain, Alzheimer’s researchers patiently explain, is hard – harder than the heart, harder even than cancer. While that may be true, it is increasingly apparent that there is another, more disturbing reason for the tragic lack of progress: The most influential researchers have long believed so dogmatically in one theory of Alzheimer’s that they systematically thwarted alternative approaches. Several scientists described those who controlled the Alzheimer’s agenda as “a cabal.”
In more than two dozen interviews, scientists whose ideas fell outside the dogma recounted how, for decades, believers in the dominant hypothesis suppressed research on alternative ideas: They influenced what studies got published in top journals, which scientists got funded, who got tenure [an indefinite academic post], and who got speaking slots at reputation-buffing scientific conferences.
A symptom of academia’s open-mindedness deficiency is the usage of the terms fringe research, fringe researcher and fringe theory within academia today. Fringe research, which questions things that are considered facts by the great majority of mainstream researchers, can only benefit mainstream research. If such research tries and fails to challenge an apparent fact then that outcome further strengthens the case for the apparent fact. And if the challenge succeeds then that outcome both advances our knowledge and saves mainstream researchers from wasting further time on research that’s premised on a false belief. Indeed, many mainstream theories today began as fringe theories – such as the idea that Earth goes around the Sun – with the preceding, and once highly-regarded, mainstream theory itself becoming a fringe theory – such as the idea that the Sun goes around Earth. And yet the terms fringe research, fringe researcher and fringe theory are derogatory within academia today, because its open-mindedness deficiency leads to a generally disdainful attitude towards such research.
The two counterintuitive theories about the psychology of belief qualify as fringe theories given that they deny things that are considered facts by the great majority of mainstream researchers today: that the mere entrance of a claim into our mind doesn't cause us to believe it, and that there are degrees of belief. However, my theoretical research vindicates them. Although, the conclusion that belief works this way simply by logical necessity actually contributes to their fringe status, given that the central tenet of the field of psychology today is that how the human mind works is solely due to the nature of the human brain.
I conducted this research while employed by Edinburgh University. However, I wasn't an academic – I don’t even have a university degree, having dropped out of the astrophysics degree that I began after leaving school. I was working in the university’s main library as a book shelver. I left that post in 2016 in order to work full-time on researching the implications of the two theories, while burning through some money left to me by my parents, the unwitting patrons of my research.
The combination of the fringe status of my research, my lack of an academic position, my lack of even academic qualifications or connections, and the fact that my research is self-funded makes me a sort of uber fringe researcher. However, I still managed to get my initial research published in an academic journal – sort of. My paper ‘How Belief Works’ was published in September 2013 in the philosophy journal Think, a publication of the Royal Institute of Philosophy, which is based in the UK. However, although Think is officially an academic journal, published by Cambridge University Press, and looks like an academic journal, it's actually more of a philosophy magazine aimed at a general audience, given that contributions aren't required to contain original analysis, can't presuppose any philosophical knowledge, and authors are discouraged from including references to other works. The current issue includes the article 'The Metaphysics of Farts', by a professional philosopher writing under the pseudonym Bill Capra. The article's summary states: 'I consider the metaphysics of farts. I contrast the essential-bum-origin view with a phenomenological view, and I argue in favour of the latter'. The journal's impact factor seems to not have been determined, but I imagine it's very low. However, the nature and low status of Think at least meant that it was a publication that plausibly would accept a paper presenting my fringe research – which I assume was also the thinking of 'Bill Capra' concerning their fart research.
As publication approached, I wondered what impact my paper would have, occasionally fantasising about it making a splash. My email address was to be printed at the end of the paper, and I wondered how many emails I’d receive. Over eight years later, I’ve yet to receive any. The paper has been cited three times, by two other papers and a PhD thesis, and each only refers to it briefly. And they’ve together only been cited eight times themselves. So the 'splash' made by my paper turned out to be of the order of that made by a raindrop hitting the surface of an ocean. Also, none of my attempts to generate interest in the paper by contacting academics directly have been successful.
However, this didn’t diminish my desire to continue this line of research. But I've decided to instead present all of my research in this evolving online book aimed at a general audience. Given the much bigger potential audience, I've a much greater chance of reaching people who are sufficiently open-minded to take my research seriously. And if the two theories thereby gain credibility outside academia then that could lead to them being taken seriously within academia.
But before I present the theoretical arguments which vindicate the two theories I need to define belief.