Sensation and Perception to Awareness: Leverhulme Doctoral Scholarship Programme


All available positions have now been taken up. The programme is no longer open to new applications.

Archive of project descriptions

Human-computer interaction and digital arts

Designing Multisensory Experiences within the Reality-Virtuality Continuum

Supervisors: Prof Marianna Obrist & Prof Martin Yeomans

Apply to the School of Engineering and Informatics

Interactive technologies such as Virtual and Augmented Reality (VR/AR) are transforming the ways in which people experience, interact with and share information. Advances in technology have made it possible to generate real and virtual environments with breathtaking graphics and high-fidelity audio. However, without stimulating the other senses, such as touch and smell, such experiences feel hollow and fictitious; they lack realism. Eating, for example, engages all our senses to create flavour experiences. There is growing interest in using emerging technologies to create new eating/food experiences within human-computer interaction, and also in psychology to enable basic explorations in sensory perception research. This PhD project will 1) explore the use of novel technologies for creating novel multisensory experiences along the reality-virtuality continuum, and 2) study the effects of those technologies on user experience in a specific use scenario, such as eating.

A student working on this project should have programming skills (e.g. C++, C#, Unity), experience in software and hardware integration (especially VR/AR), and knowledge of user-centred design and methods. A student working on this project will be supervised at the intersection of HCI and psychology (Prof Obrist in the SCHI Lab - https://multi-sensory.info - and Prof Yeomans in the SIBG), in order to ultimately inform the design of future multisensory experiences that can benefit a variety of use scenarios, from people with eating disorders (from over- to under-eating) to eating in older age (e.g. flavour augmentation), or creating novel eating experiences in emerging contexts (e.g. eating during future space travel).



Investigation of Smell Experiences applying Computational Techniques

Supervisors: Prof Marianna Obrist & Dr Julie Weeds

Apply to the School of Engineering and Informatics

The use of computational approaches has gained increasing attention from sensory science researchers. Recent work has demonstrated that the perceptual qualities of odours can be predicted with high accuracy from their chemical properties (i.e., odour molecules), thus enabling reverse-engineering of the percept of a particular molecule. The aim of this PhD project is to extract and formalise experiential properties of odours (i.e., olfactory stimuli) to generate a computational approach to odour experience design. The research will combine human-computer interaction (HCI) and user experience (UX) methods with natural language processing (NLP) and machine learning (ML) techniques to translate human smell experiences into machine representations. A student working on this project will be supervised at the intersection of ML/NLP and HCI (Dr Weeds in the TAG Lab and Prof Obrist in the SCHI Lab), in order to ultimately inform the design of future novel smell-based interfaces that help strengthen the field of multisensory experience design.
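As a purely illustrative sketch of the "predicting percepts from chemical properties" idea (every descriptor, molecule and rating below is invented; published work uses thousands of physicochemical descriptors and far richer models), a least-squares regression from molecular features to a perceptual rating has the following shape:

```python
# Hypothetical sketch: predict a perceptual odour rating (e.g. "sweetness")
# from numerical molecular descriptors via ordinary least squares.
# All data and feature names here are invented for illustration only.

def fit_least_squares(X, y):
    """Solve the normal equations (X^T X) w = X^T y for a small
    feature matrix X, using plain Gauss-Jordan elimination."""
    n_feat = len(X[0])
    A = [[sum(row[i] * row[j] for row in X) for j in range(n_feat)]
         for i in range(n_feat)]                      # A = X^T X
    b = [sum(row[i] * yi for row, yi in zip(X, y))
         for i in range(n_feat)]                      # b = X^T y
    for col in range(n_feat):
        pivot = A[col][col]
        for j in range(col, n_feat):
            A[col][j] /= pivot
        b[col] /= pivot
        for r in range(n_feat):
            if r != col:
                factor = A[r][col]
                for j in range(col, n_feat):
                    A[r][j] -= factor * A[col][j]
                b[r] -= factor * b[col]
    return b

# Toy "molecules": [scaled molecular weight, oxygen count, bias term]
X = [[1.0, 2.0, 1.0], [1.5, 1.0, 1.0], [2.0, 3.0, 1.0], [0.5, 0.0, 1.0]]
# Invented "sweetness" ratings, constructed to lie on a plane (2*mw + 1.5*ox + 0.5)
# so that the fit is exact for the demo.
y = [5.5, 5.0, 9.0, 1.5]

w = fit_least_squares(X, y)
# Predicted rating for an unseen toy molecule
predicted = sum(wi * xi for wi, xi in zip(w, [1.2, 2.0, 1.0]))
```

The project itself would replace the toy descriptors with NLP-derived experiential features and proper ML models; the sketch only shows the shape of the mapping problem.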

Investigating Feedback Musicianship

Supervisors: Dr Chris Kiefer & Dr Alice Eldridge

Apply to the School of Media, Film and Music

There is growing interest in the creation of, and performance with, feedback instruments: musical instruments characterised by the recurrent circulation of signals, leading to non-linear and complex system dynamics (Eldridge and Kiefer, 2017). These dynamics create characteristic sonic outputs, and also mean that the more-or-less stable sensorimotor contingencies which underpin mastery of, and performance with, traditional instruments dissolve: the relationships between physical gestures and sonic outcomes are dynamic and often unpredictable, rather than fixed. These instruments possess 'a stimulation uncontrollability' (Ulfarsson, 2019); compared to traditional performance practice, feedback musicianship is characterised by a more distributed agency between instrument and player. Feedback instruments raise new questions about the nature of musician-machine interactivity, and provide a window into wider concerns of human engagement with real-world complex dynamical systems. We welcome research proposals which investigate and advance our understanding of the experience of feedback musicianship; this could be approached through practice-led research together with computational, psychological or physiological methods. Candidates should have an interest in and experience of the study of complex systems, a grounding in cognitive science, and experience in musical improvisation (in any idiom). A third supervisor will be sought according to the needs of the research proposal, in, for example, philosophy, neuroscience or psychology.


Performing with Musical Machines

Supervisor: Prof Thor Magnusson

Apply to the School of Media, Film and Music

Recent developments in music technology and artificial intelligence have made the prospect of collaborating and performing with virtual and real musical machines a reality. Our systems can learn, evolve and demonstrate creative behaviour. But how do we perceive these machines? How do we talk about computational creativity in its diverse manifestations? What language is emerging here regarding agency, authenticity and authorship? This project will develop a series of experiments in robotic and virtual computational creativity and conduct related user studies. It will also engage in discourse analysis of how labs and companies around the world present their creative technologies. The project is an interdisciplinary study with one foot in computer science and the other in the humanities.

Explorations in Robot Opera

Supervisor: Dr Evelyn Ficarra

Apply to the School of Media, Film and Music

The idea of using the performing arts as a laboratory through which to explore human-robot interaction (HRI) is gaining traction as a way of energising both the arts and the development of artificial intelligence. Note, for example, the recent Performing Robots conference in Utrecht in May 2019, which repeatedly posited the idea of theatre as a ‘test bed’ for HRI explorations. Opera is a form which involves a high level of musical, theatrical and visual creativity. Experimental forms of opera, especially devised music theatre and improvisational forms, could therefore be the ideal HRI ‘sand pit’. The abilities displayed by good musical/dramatic improvisers (listening skills, a sense of timing and proportion, a commitment to teamwork, an ability to respond quickly and adapt to changing situations, an ability to imaginatively co-create) could also be seen as transferable and highly prized social skills in the development of AI. On the other hand, the influx of robotic and artificial intelligence technologies into the performing arts raises key questions about the nature and meanings of performance itself, as an augmented human experience. Which technologies will create the most compelling ‘operatic’ experience, and how might they redefine the form? Diverse proposals are welcome. Applicants should have a balance of expertise and/or strong interest in at least two of the following: music composition for opera or music theatre, robotics, music programming (e.g. of improvisational systems), artificial intelligence, machine learning, sensor-driven technology.

Seeing with Sound: Visual-to-Auditory Sensory Substitution for the Blind and Sighted

Supervisor: Dr Jamie Ward

Apply to the School of Psychology

The prospect of blind and visually impaired people ‘seeing with sound’ remains a largely unfulfilled promise. This is despite significant advances in mobile sensory technology and scientific evidence suggesting that the brain is capable of learning to interpret visual information conveyed by other senses. Existing sensory substitution solutions often fail to convey the information that blind people are interested in (e.g. colour, depth); they are complex to learn, slow to respond, and suffer from poor aesthetics. We have recently developed a novel mobile phone solution (SoundSight) that is highly flexible in terms of the visual information that is converted to sound, the range of sounds that can be played, and its responsiveness to movement of the sensor/object. A PhD student working on this project could use the flexibility of the existing system to develop novel audio-visual mappings and explore them (qualitatively and quantitatively) with users in both real-world and controlled settings. In addition to visual impairment, there are opportunities to develop multisensory augmented reality for the sighted. The PhD student could come from one of several backgrounds (informatics, psychology, music technology) but should have an interest and aptitude in music.

Ward, J., Obrist, M., Alvarez, J., & Hamilton-Fletcher, G. (2019, November 20). SoundSight: A Mobile Sensory Substitution Device that Sonifies Colour, Distance, and Temperature. doi: 10.31234/

Student-led proposals

We welcome proposals for student-led projects in human-computer interaction (HCI), music and digital arts. Projects on HCI may research how multisensory experiences shape the ways we design and interact with technology in the future. Music proposals can investigate aspects of musical performance, perception or composition that adopt strongly interdisciplinary perspectives or methods. How might contemporary research on the senses inform multimedia art practices? How might novel creative applications of new technologies shed light on the perception of music and/or computational arts? Projects engaging with computational media and virtual and augmented reality technologies are encouraged.

Potential supervisors include: Alice Eldridge, Evelyn Ficarra, Ed Hughes, Chris Kiefer, Thor Magnusson, Marianna Obrist.


Human Cognition and Cognitive Neuroscience

Investigating Perceptual Presence using Virtual Reality

Supervisors: Dr Keisuke Suzuki, Dr David Schwartzman & Prof Anil Seth

Apply to the School of Engineering and Informatics

We experience real-world objects as more perceptually 'present' than mental imagery of a similar object. Sensorimotor theories of consciousness explain this phenomenon as due to our brain encoding knowledge about how afferent visual signals change given motor actions, such as eye movements. Based on this theory, we have shown that disruptions to normal sensorimotor coupling lengthen access to awareness within a binocular suppression paradigm (Suzuki et al., 2019). However, many other factors could potentially affect the perceptual presence of an object, such as its familiarity, its dimensionality (3D vs 2D; e.g. Korisky et al., 2018), other spatial and temporal aspects of sensorimotor coupling, and intentional versus automatic movements. This project will systematically investigate the factors that facilitate access to visual awareness using variations of continuous flash suppression paradigms. The project will also involve theoretical and practical work towards developing alternative methods of measuring perceptual presence.


Suzuki, Schwartzman, Augusto & Seth (2019) Sensorimotor contingency modulates breakthrough of virtual 3D objects during a breaking continuous flash suppression paradigm. Cognition.

Korisky, Hirschhorn & Mudrik. (2018) "Real-life" continuous flash suppression (CFS)-CFS with real-world objects using augmented reality goggles, Behavior Research Methods.

Seth (2014) A predictive processing theory of sensorimotor contingencies: explaining the puzzle of perceptual presence and its absence in synesthesia. Cognitive Neuroscience.

Noë (2004) Action in perception. Cambridge, MA: MIT Press.

Required skills: basic programming (MATLAB, R or Python), experimental design, statistical analysis.

Preferred skills: experience of programming in Unity or C#; research experience with virtual reality.

Neural codes for auditory feature binding

Supervisor: Dr Ediz Sohoglu

Apply to the School of Psychology

When listening, we effortlessly pick out multiple features from sound: when listening to a piano, for example, we can appreciate that the sounds convey a melody but also a rhythm. Feature binding refers to our ability to combine different features of sound (like melody and rhythm) into a coherent perceptual experience. This ability is thought to be essential for successful listening in cluttered environments where multiple sound sources are present (e.g. different speakers, traffic noise, etc.) and the associated features need to be correctly assigned to each source.

The neural basis of feature binding is an enduring problem in cognitive neuroscience and a matter of longstanding debate. Previous fMRI evidence points to a role for posterior parietal cortex but critical questions remain. What are the temporal dynamics of auditory feature binding in parietal cortex? Are parietal contributions to feature binding entirely sensory-driven or are they dependent on high-level cognitive factors like the listener's attentional state and prior experience?

In this project, we will address these questions by using EEG to measure brain responses to changes in multiple sound features. Multivariate statistical methods will then be used to isolate neural signatures of auditory feature binding. Students should have a background in Psychology or other relevant discipline (e.g. Neurosciences, Engineering, Computer Science). Previous experience of computer programming, functional brain imaging (EEG, fMRI) or signal processing is desirable.


Sensing the first step

Supervisor: Dr Luc Berthouze,

Apply to the School of Engineering and Informatics

Initiating gait in humans is a complex action. Taking the first step involves lower-limb muscle activation of the stance limb that pushes the body’s centre of gravity into instability, such that the body ‘falls’ forward and is caught by the stepping leg going through its swing phase into stance. This process often breaks down in neurological disease and ageing, such that the subject is prone to falling and injury. This project seeks to provide a new whole-body sensorimotor physiological functional understanding of the transition from stance to first step.

To tackle this question, various approaches will be considered, from computer modelling to physiological recording, through experimental manipulations including interfering cognitive tasks and navigating virtual reality environments. The student will be part of an exciting collaboration involving the Universities of Sussex, Copenhagen and Oxford, and UCL. Potential experimental paradigms will include simultaneous recordings of the following: cortical and cerebellar OPM MEG (UCL), lower-limb EMG, kinematics, wearable sensors and foot pressure recordings. More theoretical work will include causal and functional brain and brain-muscle network analysis, non-stationary time-series analysis, as well as modelling paradigms such as predictive coding.


Consciousness and the Predictive Brain

Supervisors: Prof Andy Clark, & Prof Anil Seth,

Apply to the School of Engineering and Informatics

Biological brains are increasingly cast as ‘prediction machines’: evolved organs whose core operating principle is to learn about the world by using stored knowledge to predict the incoming sensory signal. Human experience, this suggests, is never ‘raw’ – instead, it is always and everywhere the effect of combining incoming evidence with multi-level cascades of prediction.

In this project, we ask what this can tell us about the nature and origins of consciousness itself. What is it about some prediction regimes that delivers feelings of pain and pleasure, or that enables us to see the deep red of the sunset over the English Channel? And what is it that, in advanced agents, makes them so easily drawn towards dualist understandings of their own mental lives?

Another possible area of investigation asks what this can tell us about the phenomenology of the self. What is it about some prediction regimes that equips complex agents with the feeling that they are individual agents confronting a mind-independent world?  In what ways can those regimes misfire, delivering pathologies of the self?

Students should have a background in one or more of: cognitive philosophy, cognitive science, neuroscience, and psychology. As well as being part of the Leverhulme doctoral programme, the student will join a highly interdisciplinary research team based at the Sackler Centre for Consciousness Science.

Impact of lifestyle challenges on perception of bodily signals

Supervisor: Dr Charlotte Rae,

Apply to the School of Psychology

Many lifestyle challenges, such as not getting enough sleep, are increasingly prevalent and impacting on physical, neural, and mental health. This project will examine the consequences that working patterns (such as part-time work) and sleep duration have on cognition and wellbeing. The project could use human neuroimaging (MRI), physiological measurements (such as sleep monitoring with actigraphy), and tests of interoception (such as heartbeat perception), in healthy participants and in applied groups, such as sleep restricted individuals or students engaging in part-time work alongside study. We will test whether such lifestyle challenges affect perception of bodily signals, and the consequences of this for cognition and wellbeing. 

Common determinants of involuntary attention to sensory stimuli and mental representations

Supervisors: Dr Sophie Forster, & Prof Jamie Ward,

Apply to the School of Psychology

Over the past several decades, much research has characterised the process by which external sensory input can involuntarily capture attention and reach awareness. Another source of information that often appears to capture attention involuntarily is our own thoughts, yet this latter process has received very little research attention. Recent research suggests that common processes may be involved in directing attention to both internally and externally generated sensory representations (Forster & Lavie, 2009, Cognition; 2013, JEP: LMC). This raises the possibility that the same factors that make an external sensory stimulus ‘salient’ may also apply to internally generated mental representations. Can we apply knowledge gleaned from the external selective attention literature to predict the occurrence of particular conscious thoughts? The project will combine behavioural and cognitive neuroscience research methods. A student working in this area should have a background in psychology or cognitive neuroscience; experience with programming is desirable.

Free energy agents: Modelling problem-solving in animals and machines

Supervisors: Dr Christopher Buckley & Prof Anil Seth

Apply to the School of Engineering and Informatics

Animals exhibit a remarkable ability to quickly and efficiently problem solve when faced with new challenges in their environment. Understanding the biological mechanisms underlying this ability represents a fundamental challenge in the brain sciences, and has potential to drive new bioinspired technologies in robotics and AI. 

This project will explore ‘Bayesian brain’ approaches to this challenge, grounded in the ‘free energy principle’. Converging theories across machine learning and neuroscience propose approximate Bayesian inference schemes that rest on the minimisation of an information-theoretic quantity called variational free energy, thereby providing a promising algorithmic account of problem-solving in biological agents (Friston et al., 2017; Buckley et al., 2017). The successful candidate will develop this ‘free energy principle’ based approach using theoretical and agent-based methods to account for data in human psychophysical experiments, and to develop new algorithms for autonomous robots. Applicants should have a background in a quantitative science, experience with programming, and a strong desire to advance both machine learning and neuroscience research.

Friston, K.J. et al., 2017. Active Inference, Curiosity and Insight. Neural Computation, 29(10), pp.2633–2683.

Buckley, C.L. et al., 2017. The free energy principle for action and perception: A mathematical review. Journal of Mathematical Psychology, 81, pp.55–79.
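As a worked illustration of the quantity at the heart of these references (the state names and numbers below are invented; the project's models would be far richer), variational free energy for a two-state discrete generative model can be computed directly:

```python
# Minimal sketch of variational free energy for a discrete generative model
# (a worked illustration, not the project's actual model). With hidden
# states s, an observation o, prior p(s), likelihood p(o|s) and an
# approximate posterior q(s):
#     F(q) = sum_s q(s) * [ln q(s) - ln p(o, s)]
# F is minimised, and equals the surprise -ln p(o), exactly when q is the
# true posterior; any other q gives a larger F.
import math

prior = {"food_left": 0.5, "food_right": 0.5}        # p(s), invented
likelihood = {"food_left": 0.8, "food_right": 0.2}   # p(o = odour-on-left | s)

def free_energy(q):
    return sum(q[s] * (math.log(q[s]) - math.log(prior[s] * likelihood[s]))
               for s in q)

# Evidence p(o) and the exact posterior for the observed odour
evidence = sum(prior[s] * likelihood[s] for s in prior)
posterior = {s: prior[s] * likelihood[s] / evidence for s in prior}

F_post = free_energy(posterior)                       # equals -ln p(o)
F_flat = free_energy({"food_left": 0.5, "food_right": 0.5})  # larger
```

An agent that adjusts q to reduce F is therefore performing approximate Bayesian inference without ever computing the evidence directly, which is the algorithmic idea the project builds on.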


Understanding individual differences in sweet taste

Supervisor: Prof Martin Yeomans,

Apply to the School of Psychology

Contrary to the pervasive view that sweet tastes are universally liked, behavioural data clearly show three distinct phenotypic liking patterns, with similar proportions of extreme sweet likers, moderate sweet likers and sweet dislikers in populations in the UK, US and Asia.  However, the basis for these phenotypic differences remains unknown.  This studentship, supervised by Prof Yeomans in Psychology but involving collaborations with UCL and University of Brighton, would explore this in two ways: (a) by testing for the first time potential genetic differences between the three phenotypes, focusing on both genes known to be involved in peripheral taste and central reward mechanisms, and (b) using brain imaging to determine how the phenomenology of the sweet taste experience is reflected in differences in neural activation.

Conscious recollection, objective memory and cortical reinstatement

Supervisor: Dr Alexa Morcom,

Apply to the School of Psychology

My lab’s research focuses on conscious recollection of specific past events – episodic memory. Episodic memory is what enables us to ‘relive’ a specific experience in vivid detail – whether it was an exciting birthday boat trip last year or a routine meeting at work last week. We investigate these complex memories in the lab by creating short events using images, sounds and words and evoking them with simple cues. In the brain, the hippocampus is thought to trigger the reinstatement of the same cortical patterns that were present during the original events, by a process called pattern completion. This reinstatement in turn is thought to underpin the conscious experience of reliving the past. But the subjective vividness of memories does not always track the availability of objective detail on memory tests, and cortical reinstatement has been detected when memories lack vividness and detail [1]. In this project you will explore in more depth the relations between subjective recollection and objective measures of the content of memories, using functional magnetic resonance imaging (fMRI) or electroencephalographic (EEG) recordings of brain activity to assess reinstatement with traditional univariate as well as multivariate measures. Some key questions are: How are conceptual and perceptual detail reflected in the vividness of conscious recollection? How does cortical reinstatement track subjective and objective indices of memory content? Does reinstatement underpin conscious recollection in the same way when it is triggered by different memory cues? This work makes close contact with other current projects in the lab concerned with the effects of healthy ageing [2] and of memory cues [3] on consciousness, recollection and reinstatement. You will receive training in brain imaging methods, although prior experience will be an advantage, as will strong statistical and programming skills.

You can find more information about us at


1. Thakral, P. P., Wang, T. H. & Rugg, M. D. Decoding the content of recollection within the core recollection network and beyond. Cortex 91, 101–113 (2016).

2. Abdulrahman, H., Fletcher, P. C., Bullmore, E. & Morcom, A. M. Dopamine and memory dedifferentiation in aging. Neuroimage (2015). doi:10.1016/j.neuroimage.2015.03.031

3. Morcom, A. M. Mind Over Memory: Cuing the Aging Brain. Curr. Dir. Psychol. Sci. 25, (2016).

Integrated information and neural complexity for expected and unexpected stimuli

Supervisors: Dr Adam Barrett, & Prof Anil Seth,

Apply to the School of Engineering and Informatics

Integrated information and complexity theories of consciousness propose quantitative measures of brain activity that purport to distinguish conscious from unconscious states. These theories have received considerable recent attention for some empirical success at distinguishing global states of consciousness, e.g. sleep stages and levels of anaesthesia. However, relatively little is known about how they relate to conscious versus unconscious perception. At the same time, prominent theories of perception – such as predictive coding, Bayesian brain and free energy theories – make few, if any, theoretical claims about what distinguishes conscious from unconscious perception. This project will explore how integrated information and complexity theories of consciousness might be used to extend ‘predictive’ theories of perception. One potential study will apply Lempel-Ziv complexity to EEG/MEG recordings from participants experiencing a range of stimuli about which prior expectations vary. Analysis will ask how expectations modulate the overall complexity of the neural response to stimuli, as well as its anatomical distribution. By manipulating low-level expectations (e.g. stimulus colour) as well as high-level expectations (e.g. the identity of a face), and whether or not expectations are consciously experienced, we will explore how neural dynamical complexity is modulated during conscious and unconscious predictive processing. This project will suit a student with a strong quantitative background, looking to engage with experiments and data analysis, and with an interest in driving forward new theories.
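As a toy illustration of the kind of measure involved (a simplified LZ78-style parse rather than the LZ76 algorithm and normalisation used in published EEG work), Lempel-Ziv complexity counts how many new phrases a signal contains once it has been binarised:

```python
# Simplified Lempel-Ziv complexity: count the phrases in an LZ78-style
# parse of a binary string, where each new phrase is the shortest prefix
# of the remaining input not seen before. More repetitive (predictable)
# signals parse into fewer phrases than irregular ones. This is a
# stand-in for the LZ76 measure used in the consciousness literature.

def lz_complexity(bits):
    phrases = set()
    current = ""
    count = 0
    for b in bits:
        current += b
        if current not in phrases:
            phrases.add(current)
            count += 1
            current = ""   # start the next phrase
    return count

# A strictly periodic "signal" versus an aperiodic one of the same length
regular = "01" * 16                              # 0101... repeated
irregular = "01101001100101101001011001101001"   # Thue-Morse, aperiodic
c_reg = lz_complexity(regular)
c_irr = lz_complexity(irregular)                  # larger than c_reg
```

In the study sketched above, EEG/MEG channels would be binarised (e.g. around their median) and a normalised version of this count compared across expected and unexpected stimuli.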

The Role of Predictability in Conscious Awareness and Memory

Supervisors: Dr Ryan Scott, & Prof Chris Bird,

Apply to the School of Psychology

In our lives we encounter situations that are familiar and predictable, and others that are novel and unexpected. Our familiarity with a situation has dramatic effects on ongoing mental processes; we tend to be more alert and aware of our surroundings in an unfamiliar situation and more prone to introspection in a familiar one. But how general is this effect, and what are its mechanisms? Are we better at detecting meaningful signals in an unfamiliar environment, or just distracted by irrelevant details? Are we better equipped to draw on unconscious sources and learn new information in unfamiliar situations? And are there neural correlates of the predictability of a situation? To address these questions in naturalistic settings we will use 3D videos and immersive virtual reality to situate people within predictable or unpredictable situations while they conduct a variety of cognitive tasks. Analyses will seek to combine behavioural responses with neuroimaging data (fMRI and EEG). The project will involve both behavioural experiments and computer-based analyses. Students should have a background in cognitive neuroscience, psychology or informatics.


Investigating cues to colour constancy using virtual reality

Supervisors: Dr Jenny Bosten & Prof Anna Franklin

Apply to the School of Psychology

The colours of objects appear to remain stable despite large changes in the colour of the light reflected from them, depending on the colour of the illumination. This ability, known as colour constancy, has many associated mechanisms, from 'low-level' colour contrast of an object with its background to 'high-level' effects from memories of particular objects' colours. Colour constancy is typically studied in the lab using very restricted stimuli, for example 2D coloured patches on backgrounds. Virtual reality offers a way to study the mechanisms of colour constancy using the precise manipulations of psychophysics, but for stimuli embedded in naturalistic 3D scenes. In order to build the most complete picture yet of how colour constancy functions, the project will investigate the relative importance of the different mechanisms for constancy by sequentially eliminating different cues from the virtual scenes and measuring their effects on perception.

Visual tuning to environmental statistics during early development

Supervisors: Dr Jenny Bosten & Prof Anna Franklin

Apply to the School of Psychology

In many respects, the human visual system appears to be optimized for representing the visual information contained in natural scenes. Is this optimization genetic, having evolved over many generations, or is it ontogenetic, meaning that each individual learns to optimally represent their own environment via early interactions with the visual world? Research on the visual abilities of human infants offers a way to explore the answer to this question. We will investigate the tuning of the perceptual abilities of human infants to the visual statistics of natural scenes, investigating how the tuning develops with time. To do this we use a variety of methods: eye tracking, psychophysics and EEG. In particular we plan to use steady-state visually evoked potentials (SSVEPs), which are an excellent way of probing the neural mechanisms of vision in infants: they are efficient, have high signal-to-noise ratios, and do not require behavioural responses to be measured.

The top-down generation of perceptual experience

Supervisor: Prof Anil Seth

Apply to the School of Engineering and Informatics

An influential framework for investigating perception conceives of the brain as a 'prediction machine'. In this view, conscious perceptual content is determined by the brain's "best guess" of the causes of sensory inputs, and is shaped or constituted by predictions or expectancies about these causes. This flexible project will investigate the neural dynamics underlying (human) conscious perception via one or more of a number of related approaches. In one approach, imaginative suggestion (previously studied in the context of ‘hypnosis’) will be used to induce perceptual experiences. The nature of such experiences will then be examined psychophysically and modelled using Bayesian computational models. A second approach will focus on the role of the visual alpha rhythm, combining techniques including stroboscopic stimulation and analysis of ‘perceptual echoes’ to uncover how alpha-range oscillations mediate predictive perception. (The perceptual echo is a long-lasting alpha-band reverberation in the visual EEG response.) A third approach will use transcranial magnetic stimulation (TMS) in combination with EEG to identify causal influences on conscious perceptual content. Applications are welcome that expand on any or all of these areas, or which develop other areas focused on the ‘top-down generation of perceptual experience’ – using computational, psychophysical and/or imaging approaches. Students should have a background in cognitive neuroscience, psychology, engineering or physics, with experience of computational methods being highly desirable. As well as being part of the Leverhulme doctoral programme, the student will join a highly interdisciplinary research team based at the Sackler Centre for Consciousness Science.

Student-led proposals

We welcome proposals for student-led projects using methods of cognitive and computational neuroscience to investigate issues in sensation, perception, and awareness. These methods may include, but are not limited to, neuroimaging, psychophysics, behavioural studies, virtual and augmented reality, and psychophysiology. Which computational aspects of active predictive perception shape visual awareness? What are the neural signatures of unusual perceptual experiences? How are internal and external sensory signals integrated to create a sense of self? Project proposals should identify an appropriate supervisor and clearly explain the research question and proposed methodology, as well as relevance to the Leverhulme programme agenda.

Potential supervisors include: Jenny Bosten, Hugo Critchley, Zoltan Dienes, Sophie Forster, Anna Franklin, Sarah Garfinkel, Warrick Roseboom, Ryan Scott, Anil Seth, Natasha Sigala, Julia Simner, Jamie Ward, Luc Berthouze, Andy Clark, Alexa Morcom, Charlotte Rae, Ediz Sohoglu, Martin Yeomans.


Sensory Neuroscience

Vision at the origin of vertebrate life

Supervisor: Prof Thomas Baden

Apply to Sussex Neuroscience

The retinal basis of vision in all vertebrates, from ancient sharks to modern humans, is based on a common circuit blueprint that first evolved more than 500 million years ago during the Cambrian explosion, right at the origin of vertebrate life itself (Lamb et al. 2007). However, we know next to nothing about the function of these ancient retinas, which survive in "living fossils" such as lampreys and primitive sharks to the present day. Instead, our current understanding of the retinal basis of vision is largely based on species that emerged much later, most notably mice (Baden et al. 2016; Bae et al. 2018; Franke et al. 2017; Shekhar et al. 2016). Do retinal circuits in mice, or in our own eyes, resemble those in ancient species? Understanding the functions and circuit motifs present in ancient lineages will be fundamental to our understanding of vision in general and will allow us to place our extensive knowledge of mammals in a broad evolutionary context (Baden et al. 2019).

The project seeks to build the first functional understanding of retinal circuits for vision in species of lampreys and cartilaginous fish (rays and sharks). For this, we will capitalise on state-of-the-art, high-throughput multielectrode recordings from thousands of individual neurons in retinal explants using established techniques in the lab. In parallel, as required, we will also explore functional, structural and molecular fingerprints of key retinal circuits using bleeding-edge techniques such as 2-photon microscopy, transcriptomics, and serial-section electron microscopy. Animals will be sourced from the local SeaLife Centre in Brighton and/or the local fishery industry. Together, the project will establish the first large-scale survey of circuits for vision in the oldest living vertebrates on the planet.

Key References

Baden T, Berens P, Franke K, Román Rosón M, Bethge M, Euler T. 2016. The functional diversity of retinal ganglion cells in the mouse. Nature. 529(7586):345–50

Baden T, Euler T, Berens P. 2019. Retinal circuits for vision across species. Nat. Rev. Neurosci. In press.

Bae JA, Mu S, Kim JS, Turner NL, Tartavull I, et al. 2018. Digital Museum of Retinal Ganglion Cells with Dense Anatomy and Physiology. Cell. 173(5):1293-1306.e19

Franke K, Berens P, Schubert T, Bethge M, Euler T, Baden T. 2017. Inhibition decorrelates visual feature representations in the inner retina. Nature. 542(7642):439–44

Lamb TD, Collin SP, Pugh EN, Jr. 2007. Evolution of the vertebrate eye: Opsins, photoreceptors, retina and eye cup. Nat. Rev. Neurosci. 8(12):960–76

Shekhar K, Lapan SW, Whitney IE, Tran NM, Macosko EZ, et al. 2016. Comprehensive Classification of Retinal Bipolar Neurons by Single-Cell Transcriptomics. Cell. 166(5)


Early visual processing

Supervisor: Prof Leon Lagnado

Apply to the School of Life Sciences

The retina and brain of zebrafish provide an excellent context in which to study how neural circuits process sensory information: we can finely control the input to the circuits while observing the activity of neurons and synapses within them. We achieve this by using multiphoton microscopy to image fluorescent reporter proteins in neurons as we present visual stimuli. This approach has allowed us to analyse how excitatory and inhibitory neurons within the retinal network contribute to basic computations underlying vision, such as the detection of motion or the orientation of a spatial feature.

Several projects are available. One will investigate adaptation within the retinal circuit – changes in the way that visual stimuli are processed according to the recent history of activity. Changes in the properties of synapses within the retina play a key role in such "network adaptation" and these changes can in turn be caused by the release of neuromodulators such as dopamine. We will ask how these substances modify the synaptic transmission of the visual signal through the retinal network and the information that is sent back to visual centres in the brain. A second project will focus on how neuromodulators act on the visual centres themselves.


A genetic approach to Sensory Neuroscience

Supervisors: Prof Claudio Alonso, & Prof Thomas Nowotny,

Apply to the School of Life Sciences

The brain integrates information from a range of sensory modalities and uses it in combination with previously stored knowledge to generate a behavioural response. Given that all circuit components of behaviour –i.e. sensory neurons, interneurons, motor neurons and their synaptic connections– are constructed under the direction of the genes, mutations affecting the structure or function of any of these components are likely to alter behaviour and offer a window into the molecular processes that underlie brain function.

Here we use a genetic approach to investigate the function of sensory neurons involved in signalling body position and proprioception, exploiting the simplicity and genetic accessibility of the fruit fly brain. We build on a discovery recently made in our lab (Picao-Osorio et al. 2015 Science; Issa et al. 2019 Current Biology) that mutation of a single fruit fly gene can affect a complex behaviour termed self-righting, a motor sequence that is evolutionarily conserved all the way from insects to humans and that allows a subject to return to its normal position if turned upside-down. Furthermore, using state-of-the-art neural connectomics approaches (Schneider-Mizell et al. 2016 eLife) we have mapped the cellular circuitry underlying self-righting and identified the sets of sensory neurons that enable the animal to determine its posture, opening an opportunity to explore the mechanisms of sensory function and perception in a genetically-tractable model.

This interdisciplinary project will use the self-righting system (and other motor paradigms) to investigate the mechanisms by which sensory neurons mediate posture control. It builds on the complementary strengths and expertise of the Alonso and Nowotny labs in molecular and computational neuroscience, respectively. We will combine modern quantitative behavioural approaches and genetic and optogenetic manipulations of single sensory neurons to establish the signals they detect and how this information affects the activity of the self-righting neural circuit. In particular, we will apply single-cell transcriptomics, advanced optical neural imaging using 2-photon microscopy, optogenetics and computational modelling to generate causal models of the self-righting and other motor circuits in Drosophila larvae. All in all, the project will take us all the way from the gene to sensory function and contribute to the understanding of the molecular basis of perception.


Incentive salience learning and perceptual bias

Supervisors: Dr Hans S. Crombag & Dr Eisuke Koya

Apply to the School of Psychology

Incentive salience attribution is an associative learning mechanism by which neutral stimuli or cues, as a function of repeated pairing with rewarding or otherwise biologically relevant events, become endowed with motivational significance.  In this way, reward-paired cues become more salient, stand out from the background to attract attention, and become highly wanted or desired.  Surprisingly, the extent to which incentive salience attribution may involve mechanisms at the perceptual/attentional level, i.e. how visual or auditory incentive cues become more likely to be detected or perceived via top-down control mechanisms, is largely unknown.  However, a small neuroscience literature in humans is developing (e.g. Hickey and Peelen, J Neurosci 2015), supported by earlier work on attentional bias effects from many experimental psychology labs.
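The associative core of this process can be sketched with a standard Rescorla-Wagner value update, in which a cue's learned value grows with each cue-reward pairing in proportion to the prediction error. This is a generic textbook illustration, not a model used in the project; the learning rate and reward magnitudes below are arbitrary.

```python
def rescorla_wagner(rewards, alpha=0.1, v0=0.0):
    """Track the learned associative value of a cue across pairings.

    On each trial, V is nudged towards the obtained reward by the
    prediction error (reward - V). With repeated pairings V approaches
    the reward magnitude, a simple stand-in for the cue acquiring
    incentive value.
    """
    v = v0
    history = []
    for r in rewards:
        v += alpha * (r - v)
        history.append(v)
    return history
```

Fifty pairings with a unit reward drive the cue's value close to 1, with the fastest gains early on, which is the classic negatively accelerated acquisition curve.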

Studies in laboratory animals allow for more in-depth examinations of the underlying brain mechanisms of perception and attention. To measure incentive learning effects, researchers typically use measures such as approach behaviour or lever pressing, but these fail to dissociate perceptual and attentional processes from motivational ones; the proposed project seeks to remedy this. Using both established and newly developed signal detection-based methods, the project will examine the effects of (reward-based) incentive learning on visual or auditory perception thresholds and biases, and establish whether fluctuations in incentive value (e.g. as a result of devaluation or hunger) are expressed at the perceptual level. This project will involve a combination of modern neuroscience approaches, including fibre photometry, optogenetics, and ex vivo electrophysiology, to precisely characterise underlying neurobiological mechanisms.
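The standard signal detection quantities that such methods estimate, sensitivity (d') and response criterion (c), are simple functions of hit and false-alarm rates; a minimal Python sketch for illustration (not the project's analysis pipeline):

```python
from statistics import NormalDist

def dprime(hit_rate, fa_rate):
    """Sensitivity (d') and criterion (c) from hit and false-alarm rates.

    d' measures how well signal trials are discriminated from noise
    trials; c measures response bias (negative = liberal, positive =
    conservative). Rates must lie strictly between 0 and 1.
    """
    z = NormalDist().inv_cdf          # inverse of the standard normal CDF
    d = z(hit_rate) - z(fa_rate)
    c = -0.5 * (z(hit_rate) + z(fa_rate))
    return d, c
```

Separating d' from c is exactly the dissociation the paragraph above calls for: incentive learning could raise detectability itself (d'), shift the willingness to report a cue (c), or both.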


Neural control circuits for perceptual tuning of sensory inputs and behaviour selection

Supervisors: Prof Kevin Staras & Prof George Kemenes

Apply to the School of Life Sciences

All animals must compute appropriate behavioural responses to external sensory inputs based on their current internal (motivational) state. For example, hungry animals substantially alter the perceived value of potential food stimuli and change their behavioural priorities accordingly: increasing their sensitivity to food cues, promoting foraging behaviours at the expense of a greater risk of predation, decreasing expression of non-essential behaviours (e.g. reproduction) and raising their tolerance to aversive inputs. How the nervous system coordinates this remarkable reconfiguration of diverse behaviour-generating networks is completely unknown. This project will take advantage of the experimentally accessible circuits in the mollusc Lymnaea, an animal that performs complex state-dependent perceptual decision-making (Crossley et al. 2016, 2018) and whose networks underlying its principal behaviours (ingestion, egestion, locomotion, defensive-withdrawal, reproduction, respiration, and heart control) have been extensively characterized. You will use state-of-the-art fluorescence imaging, multi-electrode array technology, intracellular recording methods and machine-learning behavioural analysis (DeepLabCut) to characterize how the nervous system alters the valency of received sensory input to dynamically choose between and express different and often antagonistic behaviours. This project, which will suit a student with good technical skills and a strong interest in neuroscience and computation, is an exciting opportunity to exploit state-of-the-art methods to reveal fundamental events in sensory processing and perceptual decision-making.

Crossley M, Staras K, Kemenes G. (2018) A central control circuit for encoding perceived food value. Sci Adv. 4(11):eaau9180.

Crossley M, Staras K, Kemenes G. (2016) A two-neuron system for adaptive goal-directed decision-making in Lymnaea. Nat Commun. 7:11793.

Active learning in visual navigation

Supervisors: Prof Andrew Philippides & Prof Paul Graham

Apply to the School of Life Sciences

Perception is not a passive problem. Animals actively explore their environments to acquire the information they need to guide future behaviour. This can be seen in disparate systems, from the beautifully choreographed learning flights of bees (Philippides et al., J. Exp. Biol., 2013) to the eye movements of humans (Land and Hayhoe, Vis. Res., 2001). As such, how active vision is tuned to task, environment and visual system is an important theoretical question. We are seeking a student to explore these questions both experimentally and theoretically; the specific focus will be guided by the student but can range from insects to humans or even autonomous robots. A good applicant will have a technical background and an interest in bio-inspired solutions, or a biological background with some technical skills and a desire to improve them, and will join a strong multi-disciplinary research team.
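One classic starting point in this area is the "snapshot" family of models of insect visual homing, in which an agent chooses the heading whose current view best matches a stored view of the goal. The sketch below is a minimal toy version with hypothetical data, not a description of the supervisors' specific models.

```python
def best_heading(current_views, snapshot):
    """Choose the heading whose view best matches a stored snapshot.

    current_views: dict mapping heading -> image (flat list of pixel
    intensities, same length as 'snapshot'). Returns the heading that
    minimises the root-mean-square image difference.
    """
    def rms(a, b):
        return (sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)) ** 0.5
    return min(current_views, key=lambda h: rms(current_views[h], snapshot))
```

Because matching quality depends on where and when views were stored, such models make the link between active exploration (learning flights, scanning) and later navigation explicit and testable.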


Student-led proposals

We welcome student-led project proposals that examine basic mechanisms of sensory coding, through to the study of neural circuits and their computational principles.  New techniques such as optical (multiphoton) microscopy make it possible to study complex neural circuits in freely moving animals in naturalistic environments (real or virtual), in species such as mice and zebrafish.  These methods allow us to characterise, analyse and model the distributed neural circuits that allow behaving animals to process the sensory signals caused by both the environment and their own actions.  Student-led proposals could explore cross-overs from biology to robotics and AI, where machines learn to actively deploy the sensory modalities of vision, taste, smell, touch and sound.

Potential supervisors include: Tom Baden, Chris Buckley, Hans Crombag, Paul Graham, Catherine Hall, Eisuke Koya, Leon Lagnado, Miguel Maravall, Thomas Nowotny, Jeremy Niven, Andy Philippides, Bryan Singer, Kevin Staras, Claudio Alonso.