Bulletin

Sussex study reveals how ‘blind insight’ confounds logic

People can gauge the accuracy of their decisions, even if their decision-making performance itself is no better than chance, according to new University of Sussex research.

In a study, people who showed chance-level decision making still reported greater confidence about decisions that turned out to be accurate and less confidence about decisions that turned out to be inaccurate.

The findings, published in Psychological Science, suggest that the participants must have had some unconscious insight into their decision making, even though they failed to use the knowledge in making their original decision, a phenomenon the researchers call “blind insight.”

Lead author, psychologist Dr Ryan Scott, says: “The existence of blind insight tells us that our knowledge of the likely accuracy of our decisions – our ‘metacognition’ – does not always derive directly from the same information used to make those decisions. It appears our confidence can confound logic.”

Metacognition, the ability to think about and evaluate our own mental processes, plays a fundamental role in memory, learning, self-regulation and social interaction, and is markedly altered in certain mental states, such as some mental illnesses and altered states of consciousness.

Consciousness research reveals many instances in which people are able to make accurate decisions without knowing it, that is, in the absence of metacognition. The most famous example of this is blindsight, in which people are able to discriminate visual stimuli even though they report that they can’t see the stimuli and that their discrimination judgments are mere guesses. 

Dr Scott and colleagues at the University’s Sackler Centre for Consciousness Science wanted to know whether the opposite scenario to blindsight could also occur. He says: “We wondered: Can a person lack accuracy in their decisions but still be more confident when their decision is right than when it’s wrong?”

The researchers analysed data from 450 participants performing a simple decision task. The participants first viewed a set of letter strings which, unknown to them, followed a complex set of rules specifying the order of the letters.

They were then told of the existence of these rules and asked to classify a new set of strings according to whether or not they obeyed the same rules, answering yes or no. After each decision they had to indicate whether or not they had any confidence in their answer.

The researchers found that, while the majority of the participants were able to classify the strings with some accuracy, a large subset performed no better than if they had selected yes or no at random. However, looking at the confidence ratings for that subset of ‘random responders’ revealed that they were more likely to express confidence in their right decisions than in their wrong ones.
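The analysis described above can be sketched as a small simulation. This is hypothetical, illustrative code, not the study's own analysis: the participant model, trial count and “insight” parameter are all assumptions made for the sketch. It models a “random responder” whose yes/no decisions are at chance but whose binary confidence reports still track accuracy, then measures the gap in confidence between correct and incorrect trials.

```python
import random

def simulate_participant(n_trials=50, insight=0.3):
    """Simulate a hypothetical 'random responder': classification is at
    chance, but confidence is more likely on correct trials."""
    trials = []
    for _ in range(n_trials):
        correct = random.random() < 0.5           # chance-level decision
        # Confidence probability shifts up on correct trials, down on errors
        p_conf = 0.5 + (insight if correct else -insight) / 2
        confident = random.random() < p_conf
        trials.append((correct, confident))
    return trials

def confidence_gap(trials):
    """Proportion of confident responses on correct trials minus that on
    incorrect trials; a positive gap indicates metacognitive insight."""
    conf_correct = [c for ok, c in trials if ok]
    conf_wrong = [c for ok, c in trials if not ok]
    if not conf_correct or not conf_wrong:
        return 0.0
    return (sum(conf_correct) / len(conf_correct)
            - sum(conf_wrong) / len(conf_wrong))

random.seed(1)
gaps = [confidence_gap(simulate_participant()) for _ in range(450)]
mean_gap = sum(gaps) / len(gaps)
print(f"mean confidence gap across 450 simulated participants: {mean_gap:.3f}")
```

Even though each simulated participant's decisions are pure coin flips, the mean confidence gap comes out positive, mirroring the pattern the researchers report for their chance-level subset.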

In other words, the participants could tell when their decisions were right or wrong, despite being unable to make accurate decisions in the first place.