The Conceited Folly of Certainty

post by Noah Blaff (Deep Dives) · 2020-07-28T19:56:26.888Z · LW · GW · 3 comments

Contents

  I.
  II.
  III.
  IV.
  V.
  VI.

Overconfidence is strangling public discourse, and it is going largely unnoticed.

“Blessed is the man, who having nothing to say, abstains from giving wordy evidence of the fact.” - George Eliot (AKA Mary Ann Evans)

I.

Over two thousand years ago, there was a Greek painter named Apelles. He climbed to prominence up educational rungs, moving from Ephesus to Sicyon to study under the famous painters Ephorus and Pamphilus, respectively. While history has failed to keep tabs on this enigmatic figure, his main legacy – his portrait of Alexander the Great notwithstanding – comes from Naturalis Historia, an encyclopedic tome written by Pliny the Elder.

Pliny recalls the story of a cobbler critiquing a poorly drawn shoe in one of Apelles’ paintings. A perfectionist, Apelles went back that very night and made the corrections as advised. The next morning the cobbler noticed the changes and, emboldened by his impact on such a figure, began openly criticizing to onlookers the leg Apelles had started painting. Emerging from a hiding-place, Apelles famously responded, Ne sutor ultra crepidam: “Shoemaker, not beyond the shoe.”

Two thousand years later, in 1819, essayist William Hazlitt wrote an open letter to William Gifford, then editor of the Quarterly Review. In the letter he accused Gifford of being “an Ultra-Crepidarian critic,” drawing on the famous anecdote above. Ultracrepidarianism is an obsolete, esoteric word, but with the recent waves of faux expertise online and at reputable media publications, it deserves more appreciation in the modern lexicon.

The term refers to critics who leap beyond their known arena, offering overzealous advice – an all too common occurrence. Suddenly, it seems society is filled with the voices of ‘experts’ who possess solutions to all problems. A mother who understands the legal limits of free speech. The uncle who has discovered the optimal taxation policy to minimize deadweight loss. And the university freshman who has usurped climate scientists as the authority on environmental policy. These examples seem contrived, but do they not ring with the tune of familiarity?

Walk into a social engagement, and you will likely hear heated conversations on pressing political issues: university students discussing racial equity and policy reform, parents delivering verbal polemics about the current leader. Politics has overtaken friendly discourse in most settings; it is so much of what we discuss. This isn’t a negative development. It is important to have an educated population – one that is willing to discuss such matters openly. But it is important now more than ever to hold these weighty conversations to a higher standard, because they have deep impacts.

The present danger lies in the certainty with which people believe they are right on every issue and vehemently oppose (often on a personal level) anyone who is philosophically misaligned.

Social media is rife with heated political conversations that establish factions for and against every post and comment made. Given the amount of time spent (or wasted) on these platforms, there seems to be no moment of reprieve – arguing has become our default setting. Online, individuals value pride over ethics and external perception over truthfulness. People are willing to post misleading news and hurl vitriolic ad hominem attacks in last-ditch efforts to win over the imagined ‘undecideds’ following a post.

There are some, however, who do post out of a genuine desire for change and constructive dialogue. They want to sand the rough edges of societal rhetoric and policy to make it better, more durable. Having conversations about policy decisions is important to this end. Populations that accept the status quo are more likely – whether through malevolence or ignorance – to acquiesce in injustice.

Communal discussions have a compounding effect, bolstering concern for certain practices in society. A conversation between friends causes onlookers to ponder the same issues, and the chain reaction spreads rapidly throughout social cliques. These interactions, online and in-person, serve to inform citizens and develop a cultural ethos of what is right and wrong.


Capitalism implicitly rewards specialization, which has led to post-secondary education for over 80% of young adults. This has sparked growing knowledge gaps between the generalists with BAs – or any bachelor’s degree – who read the New York Times every morning and the specialists who have spent years rigidly studying a single discipline.

A growing issue is the certainty with which societal views are communicated.

While everyone is entitled to an opinion and should not be judged for lack of accreditation, certain people are more likely to understand nuanced issues. For example, even with a slight background in statistics and academic research, the average university graduate may find it hard to fully comprehend the discipline-specific research that gets cited amid debate.

As will be discussed, experts tend to hold only a marginal advantage over laypeople in predictive ability within their discipline. Their expertise does, however, allow for more thorough debate and understanding of certain issues.

Yet average Joes and Jills openly opine with absolute certainty, pretending that policy decisions are not multi-faceted, with far-reaching impacts across society. During this pandemic, popular figures with no background in health sciences or epidemiology seem to have all the right answers. The casual onlooker engages in virulent Facebook arguments about lockdown policies without regard for the impact lockdowns leave on the most destitute, those who lack savings and economic prospects. On the other side of the argument, many neglect the death toll and the strain on healthcare that reopening might entail.

What a select few people – generally moderates, progressives, and centrists – tend to acknowledge is that they can form an opinion based on the evidence they have encountered without being sure that their opinion is the absolute best. As with most policies, the best one usually sits near the median of aggregated opinions – somewhere in the middle. These trustworthy individuals usually preface their opinions with, “I’m no expert, but …” or, “It’s impossible to tell, however, given what I’ve read …”

Again, that isn’t to say there shouldn’t be opinionated discussion about pressing issues. Rather, anytime you hear or read someone’s opinion on a complex issue based on the reading of one tendentious op-ed, consider Apelles and the shoemaker. Every cursory post under the guise of researched doctrine should remind readers: “Paint not beyond the shoe.” Know your limits.

II.

In 2018, three European researchers published a paper in the Journal of Experimental Psychology. Their goal was to test the persuasive power of confidence, a psychological concept known as the “confidence heuristic.” The researchers asked pairs of participants to jointly identify a perpetrator from a lineup of suspect photos the police had supplied. The two members of each pair were given different rough “composites” (pictures vaguely resembling the perpetrator) and asked to collaborate on the final decision of which suspect committed the crime.

Within every pair, only one member was given a good composite (one clear enough to allow for some confidence in suggesting a suspect). The study showed that those with less descriptive composites trusted their partners more because of the confidence those partners displayed. In most pairs, this led to the correct selection of the suspect. The bottom line: these trials support the confidence heuristic, with the authors noting that “confidence does signal accuracy and does encourage people to believe what is said.”

It would be a logical extension, therefore, to assume that people who confidently opine on politics are signalling their expertise and that it would be wise to trust them. The researchers noted, however, that context matters. Their experiments were conducted within “common-interest” tasks – simple tasks in which the participants shared the goal of correctly identifying the suspect. The popular example is a communal pub quiz with a prize for the winning team. If a teammate lacks confidence, their opinion is often disregarded; hesitation is a negative signal, suggesting a lack of knowledge on the subject.

“Beware, however,” one researcher notes, “of situations where people’s interests are misaligned!” For example, if a competing quiz team confidently shouts out an answer, “you’d be foolish to blindly trust their judgment—no matter how confident it is.” The authors conclude by warning, “Hence, before believing their assertive statement that Ed Balls was the most recent winner of Strictly Come Dancing, you’d do well to consider the other person’s agenda.” (For any non-Brit readers, Ed Balls’ Gangnam Style, while quite impressive, did not earn the win.)

Such is the case when people discuss political matters. Sadly, there is no alignment of interests or shared goals. As noted above, many care more about being the winner than being right – an important distinction. Therefore, the confidence heuristic that humans have relied on for millennia is nullified in such circumstances. In fact, as the author wittily remarks at the end of her commentary, you should be extremely wary of confidence when engaging with people who don’t share your goal. They will often cut corners and try to win however they can.


But how can people’s interests be misaligned? Doesn’t everybody just want to solve the issues?

Unfortunately not. People care much more about improving their social standing through perception than about reaching common ground. This is one of the most damaging norms in our social psyche. It is so deeply embedded in debate, in fact, that few ever pause to consider it. Many enjoy the comfort of being told they are right, which leads them to check into the alluring echo chamber – where everybody knows your name and agrees with you on everything. We’ve all maintained residency there at some point, whether we were aware of it or not.

Education has conditioned us – to a certain extent – to frame life in terms of right and wrong. In America, the average citizen now receives almost 14 years of formal education. By the time they enter adulthood, neural pathways have formed that associate being right with positive outcomes – scholarships, admissions acceptances, and so on – which makes the case for misalignment quite clear.

Even beyond the institutional influences that shape societal perceptions of rightness, we have evolved with a base need to be right in times of existential threat. When we are on the defense, being assaulted or accused of wrong action, confidence – and the need to be right – helps us survive. Mel Schwartz, a psychotherapist, notes that “It quickens our pulse, causes us to shout … It is the raison d'etre for most acts of hatred, violence, and warfare.”

The need to be right is a visceral reaction to any threat. In an era of heightened political division and tension, this need often leads us astray from the true goal of intellectual honesty. Believe half of what you hear and none of what you think – in the end, we are all fallible humans that trick ourselves more often than we’d like to believe.

Researchers have highlighted extensively our constant bout with confirmation bias, by which “we naturally look for evidence that validates what we already believe, which in turn makes us stronger in our convictions.” This bias is ubiquitous and almost always goes unnoticed. When you search the web for answers to a question, you are more likely to select links that provide the initial guise of opinion-confirmation. Even in writing this essay, I have unconsciously catered my search terms to align with existing views. (Noting this, I have gone back through all sections and tried to find disruptive evidence to incorporate into my writing.)

Thus, it is not surprising to see active social media users peppering comment sections with biased sources in an effort to portray confidence. Instinctively, this is done in the hopes of being regarded as right and gaining whatever social capital has been put into the pot. These forums are terrible for fostering honest and constructive discussions. That primal need kicks in when you think that every post or comment you publish might be seen by your entire social circle – hundreds of peers with the ability to judge you.

Dialogue needn’t be destructive, however. Research shows conflict can be extremely constructive and bring two parties closer together when the environment is suitable for honest discussion. The following factors contribute to meaningful debate and discussion:

1. Both parties must be willing to speak and to hear others in a respectful manner. Good-faith discussion has become an arcane relic of times past, the abacus of modern political discussion.

2. Discussions should aim to preserve the underlying relationship. Coalitions are built amongst like-minded individuals, and it is hard to build these strong connections when political discrepancies threaten to erode the relationship’s foundation.

3. Attempts must be made to see the other’s perspective. If you fail to understand why people process factual events differently, then you will be unable to understand the conflict from any angle other than your own.

4. Discussions must be accommodating and seek ways to improve the behaviour of participants.

5. The goal must be a better future state, acknowledging that this progressive ideal underpins all historical development.

6. Participants need to exercise self-awareness, knowing – as mentioned above – that emotions surely impact their views and the solutions they might propose.

These all contribute to an overarching theme: putting aside personal pride and bias while trying to conduct productive discussions. Overconfidence does not bode well for healthy debate. Certainty and openness to change are inversely correlated – as you approach absolute certainty, you can no longer accept any significant probability of being wrong on an issue.
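In Bayesian terms – a gloss I am adding here, not one drawn from the research above – this inverse relationship is exact. Bayes’ theorem gives

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,\bigl(1 - P(H)\bigr)}$$

and if $P(H) = 1$, the second term of the denominator vanishes, leaving $P(H \mid E) = 1$ for any evidence $E$ (so long as $P(E \mid H) > 0$). A credence of absolute certainty is mathematically immovable: no observation can ever revise it.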

Since conversations are geared towards being right, the misalignment of interests between parties has only deepened. Therefore, when people exude confidence in every position, you should question the evolutionary signal we are bred to perceive: that confidence signals being right. People who admit a margin of error and address their uncertainty should be lauded and trusted more than those who ignorantly press the attack.

There is a Catch-22 in which common rhetoric states we should second-guess people who second-guess themselves. But research has supported the hypothesis that interrogating personal views by playing devil’s advocate adds analytical rigor to the development of opinions. All of which leads me to wonder: did the shoemaker really care about Apelles making the best painting, or just about being admired by the public as a notable art critic?

III.

There are people who get a pass when they propose ideas with confidence: experts. The scientific method helps develop logical conclusions (and sometimes proposals) surrounding certain issues. But the problem of overconfidence occurs in academia just the same. Some of the most thoughtful and dedicated researchers acknowledge the cognitive biases at play in conducting research. P-hacking is a common technique for dressing up inconclusive evidence. And the research system has shifted to favor quantity over quality, presenting researchers with the daunting threat of ‘publish or perish.’

But, for all the system’s flaws (an entire essay is in the works on this topic alone), researchers rely on observation and evidence when drawing conclusions. Thus, to prove a point, we find it meaningful to cite direct observation, e.g., “When prisons added one additional staff member, the number of violent incidents decreased by X%.” This is a statement you would likely come across in the scientific community. What you would never see is, “A prison guard stated that one more staff member would help reduce violent incidents by X%.” That is an argumentative fallacy, focusing on the person over the data – an argument from authority.

An argument from authority occurs when the opinion of an expert on a topic is used as evidence to support a claim. It is a well-known logical fallacy, although how fallacious it really is remains a divisive topic. For example, in Introduction to Logic, a common introductory textbook for university freshmen, Copi and Cohen write:

“When we argue that a given conclusion is correct on the ground that an expert authority has come to that judgment, we commit no fallacy. Indeed, such recourse to authority is necessary for most of us on very many matters. Of course, an expert’s judgment constitutes no conclusive proof; experts disagree, and even in agreement they may err; but expert opinion surely is one reasonable way to support a conclusion.”

And in Logic, another popular course supplement, Baronett writes that:

“The appeal to expert testimony strengthens the probability that the conclusion is correct, as long as the opinion falls within the realm of the expert’s field.”

These excerpts wave away the fallacy, endorsing a mode of ‘reasoning’ that explicitly feeds cognitive bias – and this is what is commonly taught to students.


In 1923, a paper was published in the Journal of Experimental Zoology by an expert named Theophilus Painter. Painter declared in this report that humans have 24 pairs of chromosomes. This was a major development in the field and contributed to his receiving the 1934 Daniel Giraud Elliot Medal from the National Academy of Sciences.

Scientists propagated this figure without auditing Painter’s poor data and the conflicting observations he had made in his initial paper – it came to be held as fact. Over thirty years later, in 1956, researchers Joe Hin Tjio and Albert Levan published “The Chromosome Number of Man” in Hereditas – a scientific journal focused on genetics. In the paper, using more advanced techniques, the authors examined the chromosomes in human somatic cells and found that the actual number was 23 pairs (46 chromosomes).

Yet for over three decades top-tier scientists cited the author instead of the (faulty) data he provided. Even textbooks showing microscopic pictures with 23 identifiable pairs reported the number as the oft-cited 24. Experts fell prey to confirmation bias as “most cytologists, expecting to detect Painter's number, virtually always did so.”

Painter’s influence was so great that scientists preferred to believe his count over the actual evidence, and researchers who obtained the accurate number (23 pairs) modified or discarded their data to agree with Painter. Carl Sagan was right when he cogently proposed that “One of the great commandments of science is, ‘Mistrust arguments from authority.’ ... Too many such arguments have proved too painfully wrong. Authorities must prove their contentions like everybody else.”

In 1989, Dr. Martin Fleischmann and Dr. Stanley Pons – electrochemists working at the University of Utah – announced that they had found a way to create nuclear fusion at room temperature. This phenomenon, called “cold fusion,” would allow governments to produce immense amounts of nuclear energy with far smaller capital and infrastructure requirements. Moti Mizrahi of St. John’s University in Queens, New York, proposes the following:

“Suppose, then, that, shortly after their announcement, a non-expert puts forward the following argument from expert opinion:

(1) Electrochemists Fleischmann and Pons say that nuclear fusion can occur at room temperature.

(2) Therefore, nuclear fusion can occur at room temperature.”

This line of thinking, used often enough, will lead mostly to false conclusions. There is no substantive reasoning here, only optimistic folly. There will be the odd occasion where such logic produces the correct result, but it will be the exception, not the norm.

Indeed, shortly after the initial announcement, fellow researchers could not reproduce the result – it turns out that nuclear fusion has never been demonstrated at room temperature. This short vignette illustrates that “the mere fact that two electrochemists say that nuclear fusion can occur at room temperature is not a particularly strong reason to accept the claim that nuclear fusion can occur at room temperature.”

But isn’t that just one example? And am I not falling prey to the same fallacy, citing an authority and limited data to imply empirical evidence? Good observation! Let’s dig further.

In 2005, Philip Tetlock – the current Annenberg University Professor at the University of Pennsylvania – published a long-term study analyzing numerous political predictions made by ‘experts’ (academics, economists, policy makers, etc.). His results show that these experts were only slightly – and statistically insignificantly – more accurate than chance. This is to say that the ‘experts’ may as well have been guessing. Or as Tetlock himself posits, most of the experts he studied did no better than “a dart-throwing chimpanzee.”

Research seems to confirm this hypothesis time and again. In 2010, David Freedman published “Wrong: Why experts* keep failing us – and how to know when not to trust them.” The findings confirm Tetlock’s view and provide some startling observations on expert opinion:

1. Approximately two-thirds of the findings published in top medical journals are refuted within a few years;

2. There is a 1-in-12 chance that a physician’s diagnosis will be wrong to the extent that it could cause significant harm to the patient;

3. Most studies published in economics journals are refuted within a few years (i.e., their results are subsequently considered incorrect);

4. Tax returns prepared by professionals are more likely to contain errors than tax returns prepared by nonprofessionals.

Other research finds that clinical psychologists perform no better (judged by outcomes) than non-experts. Professors Colin Camerer and Eric Johnson likewise found that expert decisions are often no more accurate than non-expert decisions and are much less accurate than an automated decision procedure. The body of evidence against deference to expert judgment goes on and on, but I’ll spare you the endless barrage of one-sentence summaries of the findings.

If experts who spend their entire lives studying specific disciplines possess only a small edge in predictive prowess over the layman, who should be cited in debate? This is a good question, and one that not many have an answer to. One consideration is that there isn’t always a perfect solution: a glass-slipper, Goldilocks-just-right, one-size-fits-all silver bullet. We are imperfect beings to start; throw in extremely complex issues ripe with emotional responses and you can all but see objectivity waving goodbye.

Therefore, it is troubling to observe people prophetically commenting on such issues ad nauseam with the certainty of Nostradamus. Arguments should be based on evidence, not expert status. And the reality is, most of the time the ideas proposed will be revisited and revised. This is good – it is a sign of progress. But we should at least understand the limits of our certainty. Opening ourselves up to the possibility of being wrong can only assist in the development of better logical understandings and arguments.

This does not mean, however, that to have an informed opinion, one must dig through the minutiae of research papers cluttered with esoteric jargon. It simply means exercising awareness of our limited capacity to fully understand complex issues.

IV.

Back to Greece once more. The “sophists” (from the Greek sophia, meaning “wisdom”) were itinerant intellectuals in Ancient Greek society who taught courses in various subjects to less educated people. They claimed that they could find the answer to all questions. Notably, they pioneered rhetoric to persuade pupils of their solutions. History memorializes this unscrupulous exploitation of customers in the term “sophism”: “a [fallacious] argument for displaying ingenuity in reasoning or for deceiving someone.”

These were the elites of Greek society, an intellectually superior breed. They had the answers to questions ranging from the arts to the sciences, athletics, and physiology, as well as political quandaries. Biased historical records cloud the true nature of their teachings, as Plato’s works are the main source of record for sophist writing and thought – and Plato did not take kindly to their brand of intellectual witchcraft.

This essay is aimed at the contemporary sophists. We have developed a new class of the brightest people who exert certainty on almost every issue. But is it any coincidence that throughout history, one thread has connected many truly wise thinkers: intellectual modesty?

“Real knowledge is to know the extent of one's ignorance.” – Confucius

“Ignorance more frequently begets confidence than does knowledge.” – Charles Darwin

“Convictions are more dangerous enemies of truth than lies.” – Friedrich Nietzsche

“One of the painful things about our time is that those who feel certainty are stupid, and those with any imagination and understanding are filled with doubt and indecision.” – Bertrand Russell

Margin of error is a measure widely used in statistics. If you produce experimental results that seem ground-breaking, there is usually a confidence interval in which you express those results. The concept is a form of mathematical reticence, acknowledging that no analysis is perfectly precise or comprehensive.
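As a concrete illustration (a textbook example of my own, not a figure from any study cited here): a survey that estimates a proportion $\hat{p}$ from $n$ respondents carries a 95% margin of error of approximately

$$\text{ME} \approx z^{*}\sqrt{\frac{\hat{p}(1-\hat{p})}{n}}, \qquad z^{*} = 1.96.$$

A poll of $n = 1{,}000$ people with $\hat{p} = 0.5$ gives $\text{ME} \approx 1.96\sqrt{0.25/1000} \approx 0.031$ – the familiar “accurate to within 3 percentage points, 19 times out of 20.” The headline number is never the whole claim; the interval around it is part of the result.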

Yet we expect policymakers to play god and express unrelenting confidence – remember the confidence heuristic? – that a solution is iron-clad, with no margin for error. And when, inevitably, some of these solutions do not pan out as planned, we shame the people involved instead of examining their thought process.

Cultural norms do not reward the occasional admission of error. Politicians who are consistently wrong but apologize should not be lauded as heroes per se, but we have now acclimatized to an environment in which changing one’s mind or accepting any responsibility is a political death march. This applies to social interactions about politics as well – say the wrong thing once and do not bother trying to apologize; your bed is already made.


Useful debate seems to have gone the way of the dodo: an extinct relic. Rigidity of confidence confines the soul to a desolate life of blindness. How can ideas progress when certainty sits front and center, unwilling to yield for any passersby? George Bernard Shaw famously remarked that, “Progress is impossible without change, and those who cannot change their minds cannot change anything.”

Debates should function like negotiations: multiple parties striving for the mutual benefit that often accompanies compromise. The alternative is division and political deadlock – whose throes we are currently in. Cicero reputedly remarked that, “More is lost by indecision than wrong decision. Indecision is the thief of opportunity. It will steal you blind.” Division and polarization are the biggest contributors to stagnant policy, robbing society of potential progress.

To shift away from indecision, charismatic parties – from the grassroots voices scattered throughout online forums to politicians themselves – must shift from sophist to sage. From blind certainty and elitism to margin of error and humanity.

In Plato’s Apology, Pythia – the oracle of Delphi – claims that Socrates is the wisest man in the world. “I seem then,” Socrates replied, “in just this little thing to be wiser than this man at any rate: that what I do not know I do not think I know either.” Elegance in such wisdom is seldom found. We would do well to reinstate the modest wisdom of the past and not ride the drunken buzz of a little knowledge.

As Alexander Pope wrote, apropos for today:

“A little learning is a dangerous thing; Drink deep, or taste not the Pierian spring:

There shallow draughts intoxicate the brain, And drinking largely sobers us again.

Fired at first sight with what the Muse imparts, In fearless youth we tempt the heights of Arts;

While from the bounded level of our mind, Short views we take, nor see the lengths behind.”

V.

On July 6th, 1974, Garrison Keillor of Minnesota Public Radio (MPR) launched a new segment of the “A Prairie Home Companion” program. The new broadcast, “News from Lake Wobegon,” became a popular weekly monologue that Keillor playfully claimed recounted stories from his (fictitious) hometown, Lake Wobegon. The setting allowed for satirical and heartbreaking tales that captivated listeners.

The program’s legacy, however, comes from the trait Keillor assigned his fellow townspeople. Describing it in the inaugural broadcast, he calls Lake Wobegon “the little town that time forgot and the decades cannot improve ... where all the women are strong, all the men are good-looking, and all the children are above average.”

Over a decade later, in 1991, social psychologists Van Yperen and Buunk published groundbreaking research on “illusory superiority.” The researchers had noticed that people develop positive illusions about their own intellect: individuals would overestimate their qualities and abilities when asked, relative to what studies subsequently measured in practice. The phenomenon is better known in popular culture as the “Lake Wobegon Effect,” after the apocryphal town where everyone was above average: smarter, sexier, funnier … everything.

While most of the research driving this field is focused on Americans, some studies show that this may not be the case in other cultures. In 2007, researchers from the University of British Columbia published a paper titled, “In Search of East Asian Self-Enhancement.” The study concluded that “Within cultures, Westerners showed a clear self-serving bias, whereas East Asians did not, with Asian Americans falling in between.”

The authors confirmed their hypothesis that cultural differences contribute, in a great majority of cases, to Westerners’ overestimation of their own ability. Everyone believes they are better than average and therefore subscribes to the belief that the average is much lower than it really is.

Such was the case for McArthur Wheeler, who grossly overestimated his basic knowledge of chemistry. On April 19, 1995, Wheeler robbed two banks near his hometown of Pittsburgh. The robberies went as planned, and he believed he had gotten away with it. Except he had overlooked one large factor: his intelligence.

Wheeler thought he was a pretty smart guy and had seen how lemon juice could be used to make invisible ink. So, to defeat the banks’ security measures, he applied a coating of lemon juice to his face, believing it would keep his face from appearing on the security cameras. When police apprehended Wheeler, he sighed in disbelief: “But I wore the juice.” About 80% of you might think this is a lie, but it is not. It is cold, hard fact – yet another (extreme) example of how bad we are at assessing our own abilities.

The fight for intellectual honesty is a constant uphill sprint. We are not wired to remain objective and exercise humility; we are wired to prove our rightness at all costs. We certainly are not living in a society that truly accepts acknowledgement of error. In tribal settings, result trumps intent, and admitting fault only pins a target to your back. Today, people still cannot get away with uttering “Oops,” which leads us to sustain our egocentricity and assume that we have all the right answers and are better than everyone else: welcome to Lake Wobegon.

VI.

There may only be a single word in our language that triggers the thought of one specific person: genius. The connection to that historical figure is so strong in fact that many of you know who it is in your head right now, without me having mentioned his name. Einstein is a zeitgeist synonym for the term genius, a man who saw the world at a different level of comprehension.

Einstein, the genius who redefined how we understand the universe, was a complicated man, and his life was filled with stark contrasts. Growing up, Einstein always challenged authority and dogmatic thinking – the principal reason he left Germany to study elsewhere as a teenager. He himself cited this rebellious nature as the chief reason he discovered such groundbreaking phenomena later in life. What many fail to acknowledge about Einstein, however, is that, just like everybody else, he was a regular human being with all the accompanying flaws.

As a young adult, Einstein drew inspiration from opposing the status quo and authority in all facets of life. Later, he became the unrelenting authority whom others frequently challenged.

After Einstein’s miracle year of 1905, in which he published four groundbreaking papers while working as a Swiss patent clerk, the physics establishment rejected such extreme views. (Einstein was still largely unknown at the time, and only in retrospect has it become known as a “miracle year.”) Einstein held disdain for the established physicists who treated their own beliefs as fact and his ideas as radical and inaccurate. In time, Einstein’s ideas were vindicated, delivering him to a superstardom the likes of which no scientist – and few world leaders – had ever seen.

But alas, the field some of Einstein’s early theories had helped develop – quantum mechanics – would turn into his kryptonite. When Einstein moved to America and began working at the Institute for Advanced Study in Princeton, New Jersey, young European physicists were gaining more insight into the quantum realm. Niels Bohr, Werner Heisenberg, and Erwin Schrödinger championed the Copenhagen interpretation of quantum physics, which Einstein flatly rejected. This interpretation held that the quantum world does not obey the strict causality of Newtonian physics; it admits only probabilistic explanations.

Einstein’s entire life revolved around causality in the universe – that is what made it appear beautifully complex to him. Accordingly, Einstein did not take kindly to this interpretation of quantum physics, famously remarking that, “God doesn’t play dice with the universe.” (“Einstein,” Bohr responded, “stop telling God what to do.”)

This divergence created a rift in the scientific community, and Einstein, whose lifelong belief was in the infinite causality of the universe, would try to poke holes in the theory. Over the decades, young physicists properly rebutted these attempts. But Einstein never truly accepted modern quantum theory, even to his death. He acknowledged on many occasions that personal factors forced him time and again to cut against the grain. “To punish me for my contempt of authority,” Einstein joked, “Fate has made me an authority myself.”

How could such a brilliant individual let personal dogma and rigid belief (overconfidence, in fact), supersede rational discussion? If it could happen to Einstein, it happens with every single person…


There is a deeper connection between this story and the topic of this essay. It was not picked at random to tell the reader, “Einstein was fallible; therefore, we all are.” It deals quite directly with the main focus of our certainty and overconfidence: politics.

The part of quantum mechanics that Einstein failed to accept was that in the quantum (infinitesimally small) realm, matter behaves as both particle and wave – two very distinct objects at typical scales – and it is impossible to identify the state of such matter without measuring it. This is the classic point illustrated by Schrödinger’s cat. Until the box is opened, the cat is in a superposition of being dead and alive. Its actual state cannot be known without observation; only a probability can be assigned.

The same holds true for politics. When there is an argument over policy, there is not necessarily a right answer – just as there is no objective answer to the question, “Is the cat dead or alive?” The only thing we know is that there are some probabilities of it being a good or bad policy based on unbiased analysis and critical thought (and even then, there is no objective definition of such things).

As in quantum mechanics, we can only determine the state (the effectiveness) of a policy in hindsight, once there has been some implementation and ample data to observe the impacts. No one has the foresight to predict these outcomes in advance, so we try our best to use rational thought and hold views – or put in place policies – that have the best chance: our best candidates for success. Then we recalibrate our formulas and try again, piecemeal, until we get to a better place.
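That recalibration loop has a simple probabilistic shape. Below is a minimal sketch – my own illustration with entirely hypothetical numbers, not anything drawn from the essay’s sources – of Bayesian updating applied to a policy’s success rate:

```python
# A minimal sketch of "recalibrate and try again": treat our belief about a
# policy's success rate as a Beta distribution and update it with observed
# outcomes. All numbers are hypothetical.

def update(alpha: float, beta: float, successes: int, failures: int) -> tuple[float, float]:
    """Beta-Bernoulli update: add observed successes/failures to the prior counts."""
    return alpha + successes, beta + failures

# Prior: total uncertainty about whether the policy works (Beta(1, 1) is uniform).
alpha, beta = 1.0, 1.0

# Each period we observe some outcomes and revise the estimate, piecemeal.
for year, (successes, failures) in enumerate([(7, 3), (5, 5), (9, 1)], start=1):
    alpha, beta = update(alpha, beta, successes, failures)
    mean = alpha / (alpha + beta)  # posterior mean: our current best estimate
    print(f"Year {year}: estimated success rate ~ {mean:.2f}")
```

The estimate never collapses to 0 or 1; each round of evidence merely shifts and sharpens it – the essay’s argument in miniature.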

This, I posit, is the only objective truth in the world of policy and opinion: that there is nothing close to certainty in the daily arguments we have, and that we are riddled with human error, constantly betraying the better angels of our nature. Accordingly, you might have noticed an excessive use of the following terms in this essay: maybe, might, could, etc. Using these words is an acknowledgment of uncertainty, a way to contribute without the detrimental egotism that so often passes for charisma these days.

Many believe that William Butler Yeats’ “The Second Coming” was about an impending apocalyptic revolution. The words make the reader feel dizzy, uncomfortable, as if some ominous presence were waiting to pounce. Yeats thought the world was heading for hell, about to be overrun with carnage. Amid such solemn worry, he makes a great observation: “The best,” he writes, “lack all conviction, while the worst are full of passionate intensity.”

Maybe the Second Coming is at hand right now. People are arguing for sport, to be right, rather than to reach common ground and progress. We judge people by their worst and look to diminish our fellow man. Good-hearted peers are silenced into complicity on all sides of the political spectrum, unable to voice opinions or ask genuine questions. Naïve conviction has set us on a path toward demise. Many think the American Experiment is breathing its dying breath.

“What rough beast, its hour come round at last, slouches towards Bethlehem to be born?” wondered Yeats. Maybe the answer is naïve conviction.

3 comments

Comments sorted by top scores.

comment by gjm · 2020-07-28T23:52:06.569Z · LW(p) · GW(p)

Obviously this is a very tangential issue, but when you say

There may only be a single word in our language that triggers the thought of one specific person: genius.

I can't agree. I bet there are plenty of people, who when they hear "genius", think first of other people besides Einstein. Even within physics, I bet the fact that there's a biography of Feynman called "Genius" encourages some to think of him rather than Einstein. And there will be plenty of people who will pick, say, Dante or Gauss instead.

Other possibly better candidate words:

  • Names of schools of thought etc. Presumably we shouldn't allow terms like "Marxism" or "Euclidean", but how about e.g. "objectivism" -> Rand, "communism" -> Marx, "cyberpunk" -> Gibson, "cubism" -> Picasso.
  • Words that happen to be names of famous works, or near to them. "utopia" -> More, "wasteland" -> Eliot (even though his poem's name isn't exactly that), "tempest" -> Shakespeare, etc.
  • Words closely associated with super-duper-famous people. If you want to make people think of Einstein, I suspect "relativity" does better than "genius". If you want to make people think of Jesus and "Christianity" is cheating, "gospel" and "resurrection" probably work well.

I'm pretty sure all of these work better than ("genius", Einstein) if the question is "if asked to associate a person with this word, what fraction of people will pick the person I have in mind?". I think some of them also work better if the question is "if you just say this word, what fraction of people will immediately think of the person I have in mind?".

Replies from: Deep Dives
comment by Noah Blaff (Deep Dives) · 2020-07-29T01:09:15.557Z · LW(p) · GW(p)

You're right, that may be an over-editorialized comment to make ... it was more for narrative impact but I do think there is a uniquely strong connection from that word to one person in popular vernacular.

comment by Alexey Lapitsky (alexey-lapitsky) · 2020-08-04T11:35:01.976Z · LW(p) · GW(p)

Thanks for the article!

How could such a brilliant individual let personal dogma and rigid belief (overconfidence, in fact), supersede rational discussion

I would not go as far as to say that his beliefs were not rational or dogmatic. One can also argue that Einstein's intuition was correct and that he was right to challenge the Copenhagen interpretation.