We all like to think we have a good grasp of what we know and what we don't, yet research suggests that this grasp is generally pretty flimsy. Illusory superiority is one of the most commonly encountered psychological biases, and whilst in many instances our misjudgements are fairly harmless, in the workplace having an accurate gauge of what we know is rather useful.
Doctors, for instance, need to know the limits of their knowledge in order to make safe decisions. It isn't so much a case of being right all the time, but rather understanding what we don't know and how reliable our own knowledge is. It's what's known in the literature as metacognitive accuracy.
Knowing What We Know
A recent paper set out to test whether our metacognitive accuracy is affected by the conditions under which we rack our brains. For instance, if we're rewarded for the right answer (or punished for the wrong one), does that make our judgement any better?
The researchers conducted a couple of experiments in which participants answered a series of multiple-choice questions on a range of general knowledge topics. For each question, they were required both to give their best answer and to say whether they wished to submit it as their final answer. Correct answers were rewarded with points, whereas wrong answers saw points deducted, so knowing the limits of your knowledge was key.
Interestingly, it emerged that the most accurate understanding of our own knowledge came when the threat of punishment was at its highest. When participants would lose four points for an incorrect answer, they produced by far the most accurate responses. Indeed, this was even more accurate than the condition in which players would receive four points for a correct answer, so the threat of punishment proved more useful than the allure of reward.
Now you might say that it is purely rational for players to become more cautious when they are threatened with heavy penalties. The important thing to consider, however, is that the number of correct answers they submitted only dropped by a small amount. Instead, it was the withholding of wrong answers that increased significantly.
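That rational-caution intuition can be made concrete with a simple expected-value calculation. The sketch below is illustrative rather than a description of the paper's exact scoring scheme: it assumes a one-point reward for a correct answer alongside the four-point penalty mentioned above, and asks how confident a player would need to be before submitting rather than withholding.

```python
def submit_threshold(reward: float, penalty: float) -> float:
    """Minimum confidence p at which submitting an answer beats withholding it.

    Submitting has expected value p * reward - (1 - p) * penalty,
    while withholding scores zero. Submitting is worthwhile only when
    p exceeds penalty / (reward + penalty).
    """
    return penalty / (reward + penalty)

# Symmetric stakes: submit any answer you'd bet even money on.
print(submit_threshold(1, 1))  # 0.5

# Illustrative high-punishment condition (+1 correct, -4 wrong):
# a much higher bar for confidence before submitting.
print(submit_threshold(1, 4))  # 0.8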
So, in other words, the punishment for getting answers wrong improved participants' ability to distinguish what they knew from what they didn't.
As with most studies, it's important to remember the context and not read too much into the results; hopefully these findings will be tested further in more real-life settings. Nevertheless, they are fascinating.
"The positive impact of punishment on strategic regulation may be ripe for widespread application, especially in areas where training effective metacognitive discrimination is vital - such as medicine and business," the researchers conclude.
If the findings prove to be robust, then we may begin to see this thinking applied in a number of different areas, whether in academia or the workplace. Can you think of any environments where this thinking could be particularly useful? Your thoughts and comments below please...