During a recent seminar, I overheard a professor irately recall a viva he had just conducted. “I’ve spent half my life leading power-plant projects. This kid can hardly distinguish a chimney from a cooling tower, and I know he’s never even been to a plant, let alone interned in one. But he dares to tell me that designs have changed nowadays! Based on what? Google Images!”

Clearly, increased awareness of the powers of the Internet has had its impact on fields whose knowledge was hitherto limited to the dingy corners of large libraries. The positive effects of making a vast repository of knowledge conveniently available to the general public cannot be overstated; but neither can the illusory sense of knowledgeability that access to an unlimited supply of information creates.

Nowhere is this illusion more evident than in medicine. One in four people self-diagnoses as a substitute for visiting the doctor. Most of this self-diagnosis takes place on the Internet, where multiple websites are happy to provide tools that identify one’s disease through a series of multiple-choice questions. Online symptom-checkers have been found to be accurate only 34% of the time, with the correct diagnosis appearing among the top twenty results in less than 60% of cases. Yet the availability of symptom data on the Internet undoubtedly makes users feel empowered enough to question the diagnoses of doctors with years of study and practice behind them.

This confidence is a classic instance of the Dunning-Kruger effect, a cognitive bias whereby people are unable to assess the limits of their own abilities. In terms of knowledge, the less you know, the more you think you know. As your knowledge increases, so does your awareness of its limits, until you know enough to gauge fairly accurately how much you actually know. In an age fuelled by Wikipedia and driven by Google, the relevance of facts has gradually been undermined by popular belief. Unattributed data from WhatsApp forwards are treated with the same respect as evidence from multimillion-dollar studies. Opinions based on emotional leanings and derived from a scan of the top ten Google search results are given the same weight as scholarly arguments ripened by years of experience.

In 2015, a poll conducted in the U.S. asked respondents their opinion on bombing Agrabah. 55% of Democrats and 43% of Republicans had a for-or-against opinion on the matter. Agrabah is a fictional city from Disney’s Aladdin. In our post-truth world, where alternative facts abound, demonstrable evidence is increasingly losing its ability to shift people’s entrenched opinions. Anti-intellectualism has made its most robust global comeback since the end of the Second World War, as dispensations across the world dismiss scholarly opinion and data in favour of ideology. Even on subjects with effects as clearly verifiable as those of global warming and vaccination, millions have been unwilling to cede ground when strong expert opinion contradicts their own beliefs, which are often backed up by little more than selectively Googled articles.

While the demise of expertise hardly means the end of experts themselves, it does represent a collapse of trust in their judgement. Experts are partly to blame: errors of judgement on their part contributed to phenomena as devastating as the financial crisis of 2008, and they are often guilty of extending their authority to domains they know little about. Even so, nearly every aspect of our daily lives is today governed by devices and processes far beyond our individual understanding. We live in complicated times, menacingly inclined against specialists, yet unable to do without them.

When knowledge and evidence cease to matter in the face of feelings, beliefs and propaganda, the societies that emerge are built on fabrication, misrepresentation and untruth. It falls to the general public, experts included, to guard against this dystopia and re-establish trust in science, evidence and rationality.
