Sackler Centre for Consciousness Science

Theory and Modelling

Understanding consciousness requires new theory as well as new experiments.

By bridging disciplines from mathematics to psychiatry, the Sackler Centre is uniquely well placed to advance theory and experiment together.

Explanatory correlates of consciousness

Key to our work is the notion of an 'explanatory correlate of consciousness': a description of a brain process that accounts for specific aspects of conscious experience.

Complexity

Conscious experiences are both differentiated (each is one among a vast repertoire of possibilities) and integrated (each conscious scene is unified). This suggests that the underlying neural processes should also be simultaneously differentiated and integrated – i.e., 'neurally complex'. We develop and apply mathematical measures and models of neural complexity based on concepts such as Lempel-Ziv complexity, Shannon entropy, synergy, causal density and integrated information. We have analysed anaesthesia and sleep data, as well as data from drug-induced psychedelic states. We are also investigating whether our measures can help us better understand loss of consciousness in coma and the vegetative state.
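To give a flavour of one such measure: Lempel-Ziv complexity counts the number of distinct phrases encountered when parsing a (binarised) signal from left to right, so that repetitive signals score low and varied signals score high. Below is a minimal, illustrative Python sketch of the commonly used LZ76 parsing; the function name is invented and this is not the Centre's own analysis code.

```python
def lz76_complexity(s):
    """Count the phrases in a Lempel-Ziv (1976) left-to-right parsing
    of the string s: each phrase is the shortest prefix of the remaining
    string that has not occurred as a substring earlier in the scan."""
    i, count, n = 0, 0, len(s)
    while i < n:
        length = 1
        # Extend the current phrase while it still appears earlier in s.
        while i + length <= n and s[i:i + length] in s[:i + length - 1]:
            length += 1
        count += 1
        i += length
    return count
```

A constant sequence like "0000" parses into just two phrases ("0" | "000"), while an irregular sequence of the same length yields more phrases; in practice the raw count is typically normalised (e.g. by n / log2 n) before comparing signals of different lengths.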

Predictive Processing (Bayesian brain)

On this view, what we perceive is the brain's best prediction of the causes of its sensory signals. We are extending the theory of predictive processing to include interoception (the sense of the body from within), to account for emotion and subjective feeling states. Further, we are developing a theory of 'counterfactual' predictive processing, which explains aspects of experience such as 'objecthood' and 'presence' in terms of the brain's predictions about the sensory consequences of possible actions. We also work on the mathematics of predictive coding, with reference to the 'free energy principle'.
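The core idea can be illustrated with a toy example: a perceptual estimate is updated by gradient descent on precision-weighted prediction errors, settling on a compromise between a prior expectation and the incoming sensory signal. This is a hand-rolled sketch under simple Gaussian assumptions; the function name, parameters and values are all illustrative, not the Centre's models.

```python
def perceive(sensory, prior_mean, pi_sensory=1.0, pi_prior=1.0,
             lr=0.1, steps=100):
    """Toy one-level predictive-coding update: a single estimate mu is
    nudged by precision-weighted sensory and prior prediction errors,
    descending the (negative log) model evidence for a Gaussian model."""
    mu = prior_mean
    for _ in range(steps):
        eps_sensory = sensory - mu    # sensory prediction error
        eps_prior = prior_mean - mu   # prior prediction error
        mu += lr * (pi_sensory * eps_sensory + pi_prior * eps_prior)
    return mu
```

With equal precisions the estimate converges to the midpoint of prior and sensation; raising `pi_sensory` pulls perception towards the data, raising `pi_prior` towards the expectation – the precision-weighting that predictive-processing accounts use to model attention and, when miscalibrated, aspects of psychiatric symptoms.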

Causality analysis

The key to linking brain activity to consciousness – or to any brain function – is to be able to decipher causal interactions among different brain regions from neuroimaging data. One useful tool for doing this is Granger causality. A signal A is said to 'Granger-cause' another signal B if the past of A contains information that helps predict the future of B, over and above the information already contained in the past of B itself. We have been pioneering the theory and application of Granger causality to data from neuroscience. We have written the standard analysis software in the field, which is freely available as a fully documented MATLAB toolbox, and we are examining how the method can be applied to fMRI and EEG.
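The prediction-based definition can be made concrete with a toy least-squares implementation: fit one autoregressive model for B from its own past, another that also includes the past of A, and compare the residual variances. This is an illustrative Python sketch of a bivariate, order-p test statistic, not the Centre's MATLAB toolbox; the function names are invented.

```python
import numpy as np

def _lagged(x, p):
    """Regressor matrix whose columns are x[t-1], ..., x[t-p]
    for t = p, ..., len(x)-1."""
    return np.column_stack(
        [x[p - k - 1 : len(x) - k - 1] for k in range(p)])

def granger_causality(a, b, p=1):
    """F(A -> B): log ratio of residual variances when predicting b
    from its own past alone (restricted) versus its own past plus the
    past of a (full), using order-p linear regressions."""
    y = b[p:]
    x_restricted = np.column_stack([np.ones(len(y)), _lagged(b, p)])
    x_full = np.column_stack([x_restricted, _lagged(a, p)])
    res_r = y - x_restricted @ np.linalg.lstsq(x_restricted, y, rcond=None)[0]
    res_f = y - x_full @ np.linalg.lstsq(x_full, y, rcond=None)[0]
    return np.log(res_r.var() / res_f.var())
```

If B is driven by lagged A (but not vice versa), `granger_causality(a, b)` comes out clearly positive while `granger_causality(b, a)` stays near zero; in real analyses the statistic is assessed against a null distribution rather than read off raw, and applying it to fMRI and EEG raises further issues (haemodynamic blurring, filtering) that the surrounding research addresses.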