Centre for Computational Neuroscience and Robotics - CCNR

be.AI - Project Ideas

Supervisors have formulated the project proposals listed below. They can be used as the basis of your own PhD project proposal. Please click on the titles to see more details.

Modelling the first step

Supervisors: Prof Luc Berthouze, Dr Simon Farmer, Queen Square, UCL

Apply for a PhD in Informatics

Initiating gait in humans is a complex action. Taking the first step involves lower-limb muscle activation in the stance limb that pushes the body's centre of gravity into instability, such that the body "falls" forward and is caught by the stepping leg as it goes through its swing phase into stance. This process often breaks down in neurological disease and ageing, leaving the subject prone to falling and injury. This project seeks to provide a new whole-body, sensorimotor, physiological and functional understanding of the transition from stance to first step.

To tackle this question, various approaches will be considered, from computer modelling to physiological recording, with experimental manipulations including interfering cognitive tasks and navigating virtual reality environments. The student will be part of an exciting collaboration involving the Universities of Sussex, Copenhagen, Oxford and UCL. Potential experimental paradigms will include simultaneous recordings of cortical and cerebellar OPM MEG (UCL), lower-limb EMG, kinematics, wearable sensors and foot-pressure recordings. More theoretical work will include causal and functional brain and brain-muscle network analysis, non-stationary time-series analysis, as well as modelling paradigms such as predictive coding.

[1] Rosin, R, Topka, H, Dichgans, J, 1997. Gait initiation in Parkinson’s disease. Movement Disorders 12, 682–690.

[2] Boto, E, Holmes, N, Leggett, J, Roberts, G, Shah, V, Meyer, SS, Muñoz, LD, Mullinger, KJ, Tierney, TM, Bestmann, S, Barnes, GR, Bowtell, R, Brookes, MJ, 2018. Moving magnetoencephalography towards real-world applications with a wearable system. Nature 555, 657–661.

[3] Liu J, Sheng Y, Liu H, 2019. Corticomuscular coherence and its applications: a review. Frontiers in Human Neuroscience. 13:100.
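
To give a flavour of the corticomuscular coherence analysis reviewed in [3], the sketch below simulates a cortical signal and an EMG that share a common ~20 Hz drive and estimates their coherence with Welch's method. This is a minimal illustration only; the signals, sampling rate and frequency band are invented and are not data or code from the project.

```python
import numpy as np
from scipy.signal import coherence

# Synthetic example: cortical signal and EMG sharing a common ~20 Hz (beta-band) drive.
fs = 1000                      # sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)   # 60 s of data
rng = np.random.default_rng(0)

common = np.sin(2 * np.pi * 20 * t)                  # shared beta-band component
cortex = common + 2.0 * rng.standard_normal(t.size)  # noisy "cortical" signal
emg    = 0.5 * common + 2.0 * rng.standard_normal(t.size)

# Welch-based magnitude-squared coherence between the two signals
f, cxy = coherence(cortex, emg, fs=fs, nperseg=1024)

beta = (f >= 15) & (f <= 30)
print(f"peak coherence in the 15-30 Hz band: {cxy[beta].max():.2f} "
      f"at {f[beta][np.argmax(cxy[beta])]:.1f} Hz")
```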

Investigating Feedback Musicianship

Supervisors: Chris Kiefer, Alice Eldridge and Phil Husbands

Apply for a PhD in Music Technologies

Feedback musical instruments are characterised by the recurrent circulation of signals, leading to non-linear and complex system dynamics (Ulfarsson 2019; Eldridge and Kiefer, 2017); this results in a characteristic sound and unpredictable behaviour in which musical agency is distributed across instrument & performer. Feedback instruments raise new questions around the nature of musician-machine interactivity, and provide a window into wider concerns of human engagement with real-world complex dynamical systems.

We welcome research proposals that investigate and advance feedback musicianship; this could be approached through practice-led research together with computational, psychological or physiological methods. Candidates should have an interest in experimental music and complex systems and expertise in one or the other.

Cognitive presence and agency – potential functions for consciousness?

Supervisors: Dr Warrick Roseboom and one or more of Dr Chris Buckley, Dr Simon Bowes, Prof Anil Seth, and Prof Zoltan Dienes

Apply for a PhD in Cognitive Science or Informatics

The feelings of being present and responsible for our actions are fundamental aspects of conscious experience. When and whether something is labelled as an agent, and its acts attributed as such, remain tricky and unresolved issues. How can we measure and interpret the subjective experience of being an agent? What are the potential benefits of endowing AI systems with the ability to ‘perceive’ agency, and how might this be done? When are AIs perceived by humans as having agency? Does the feeling of being present and responsible modify interactions with the world in substantive and definable ways (e.g. changing learning profiles/outcomes)? Relevant skills depend on the topic, and we welcome proposals across disciplines, from cognitive science to applications in machine learning.

Background reading: Suzuki K, Lush P, Seth AK, Roseboom W. Intentional Binding Without Intentional Action. Psychological Science. 2019;30(6):842-853

Understanding human time perception – applications in human health and AI interaction

Supervisors: Dr Warrick Roseboom and one or more of Prof Chris Bird (Sussex), Dr Sam Berens (Sussex), Dr Kaoru Amano (CiNet, Japan), Dr Zafeirios Fountas (Huawei/UCL)

Apply for a PhD in Cognitive Science or Informatics

The time perception team at Sussex is invested in understanding the cognitive and neural underpinnings of human time perception and memory for applications in health (e.g. digital memory augmentation in dementia) and human interactions with artificial systems (e.g. see TIMESTORM project). Potential topics extend from cognitive neuroscience projects to understand the cognitive and neural bases of human time perception and memory, to projects attempting to endow AI with human-like temporal abilities. See [1] for a recent example of relevant work. Required skills depend on topic and we welcome proposals across disciplines from cognitive science to applications in machine learning.

[1] Roseboom, W., Fountas, Z., Nikiforou, K. et al. Activity in perceptual classification networks as a basis for human subjective time perception. Nat Commun 10, 267 (2019)

Building bio-mimetic algorithms by injecting function into brain models

Supervisors: James Knight, Thomas Nowotny and Paul Graham

Apply for a PhD in Informatics

Animals have evolved to solve complex problems efficiently. The human brain uses only about 20 watts, yet it competes with deep networks that require megawatt-hours of energy to train. Even insects with miniature brains achieve astounding performance on tasks including long-distance navigation and pattern recognition. In this project, you will combine recent advances in training spiking neural networks (SNNs) [1,2] with insect-inspired anatomy and behaviour to advance our understanding of this area and, more broadly, of whether SNNs can be constrained by brain anatomy to build more efficient AI. The project can be developed in multiple directions, including additional insect behavioural experiments, further development of SNN learning rules, or robotic applications. The ideal candidate would have experience in computational neuroscience and/or machine learning, good programming skills and a keen interest in SNNs.

[1] G Bellec, F Scherr, A Subramoney, E Hajek, D Salaj, R Legenstein & W Maass (2020). A solution to the learning dilemma for recurrent networks of spiking neurons. Nature Communications, 11(1), 3625.

[2] E Neftci, H Mostafa, & F Zenke (2019). Surrogate Gradient Learning in Spiking Neural Networks: Bringing the Power of Gradient-based optimization to spiking neural networks. IEEE Signal Processing Magazine, 36(6), 51–63.
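
For orientation, the sketch below illustrates the surrogate-gradient training idea of [2] on a toy spike-train classification task. This is a minimal, hypothetical PyTorch example: the network size, the fast-sigmoid surrogate and the task itself are illustrative choices, not the project's model.

```python
import torch
import torch.nn as nn

class SurrGradSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass, fast-sigmoid surrogate in the backward pass."""
    scale = 10.0

    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        return grad_output / (SurrGradSpike.scale * v.abs() + 1.0) ** 2

spike = SurrGradSpike.apply

def run_snn(x, w1, w2, beta=0.9, threshold=1.0):
    """Feed-forward LIF layer plus leaky readout; x is (batch, time, n_in) spikes."""
    batch, T, _ = x.shape
    v_h = torch.zeros(batch, w1.shape[1])
    v_o = torch.zeros(batch, w2.shape[1])
    readout = []
    for t in range(T):
        v_h = beta * v_h + x[:, t] @ w1     # leaky integration of input current
        s_h = spike(v_h - threshold)        # hidden spikes
        v_h = v_h - threshold * s_h         # soft reset after spiking
        v_o = beta * v_o + s_h @ w2         # non-spiking leaky readout
        readout.append(v_o)
    return torch.stack(readout, 1).max(1).values   # max-over-time logits

# Toy task (made up): which half of the input population fires more spikes?
torch.manual_seed(0)
n_in, n_hid, n_out, T = 20, 64, 2, 30
w1 = (0.3 * torch.randn(n_in, n_hid)).requires_grad_()
w2 = (0.3 * torch.randn(n_hid, n_out)).requires_grad_()
opt = torch.optim.Adam([w1, w2], lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

for step in range(300):
    rates = torch.rand(64, 1, n_in) * 0.3
    x = (torch.rand(64, T, n_in) < rates).float()
    y = (x[:, :, n_in // 2:].sum((1, 2)) > x[:, :, :n_in // 2].sum((1, 2))).long()
    loss = loss_fn(run_snn(x, w1, w2), y)
    opt.zero_grad(); loss.backward(); opt.step()

print(f"final training loss: {loss.item():.3f}")
```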

How does locomotion modulate sensory processing?

Supervisors: Prof Leon Lagnado and Dr Paul Pichler

Apply for a PhD in Neuroscience

Sensory stimuli drive motor behaviour but the act of locomotion can itself modulate how those stimuli are processed. We still do not understand the nature of this modulation or the underlying neural circuitry. This project will use the lateral line system of zebrafish to investigate the flow of signals through circuits that process mechanical information from the periphery during a behaviour called rheotaxis, where the fish stabilizes its body in relation to surrounding water flow. This will help us understand neural computations at the interface of sensory processing and motor control, potentially informing the design of robotic control systems.

[1] Goldblatt, D.S. and Schoppik, D., 2020. Efference Copies: Hair Cells Are the Link. Current Biology, 30(1), pp.R10-R12.

[2] Paul Pichler and Leon Lagnado (2020). Motor behaviour selectively inhibits hair cells activated by forward motion in the lateral line of Zebrafish. Current Biology, 30(1): 150-157.

[3] P Pichler & L Lagnado (2019). The transfer characteristics of hair cells encoding mechanical stimuli in the lateral line of zebrafish. Journal of Neuroscience, 39(1):112-124.

Evolving Deep Neural Networks

Supervisors: Dr Phil Birch, Dr Rupert Young, Prof Chris Chatwin

Apply for a PhD in Engineering

Working out what people are doing in a crowded video feed is a challenging computer vision task, but one with many applications in medical care, health and safety, and human-computer interaction. Deep learning networks, such as Siamese and convolutional networks, are used to achieve this [1]. However, hand-designed networks have limitations: they can have limited accuracy or become too slow. Recent research has begun designing deep learning networks using evolutionary algorithms, in which the network is evolved over multiple generations to produce an optimal design [2]. This project will use this idea to design networks that are much more efficient and yet produce state-of-the-art results.

[1] “Deep Learning for Person Re-identification: A Survey and Outlook”, Mang Ye et al. (2020)

[2] “Evolving Deep Convolutional Neural Networks for Image Classification”, Yanan Sun et al. (2019)
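
As a toy illustration of the evolutionary design loop described in [2], the sketch below evolves only the hidden-layer widths of a small classifier on synthetic data. This is a minimal, hypothetical example; the project itself would evolve full deep architectures for video analysis.

```python
import random
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

random.seed(0)
X, y = make_classification(n_samples=1500, n_features=30, n_informative=10, random_state=0)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=0)

def fitness(genome):
    """Genome = tuple of hidden-layer widths; fitness = validation accuracy."""
    net = MLPClassifier(hidden_layer_sizes=genome, max_iter=300, random_state=0)
    net.fit(X_tr, y_tr)
    return net.score(X_va, y_va)

def mutate(genome):
    """Randomly resize, add or remove one hidden layer."""
    g = list(genome)
    op = random.choice(["resize", "add", "remove"])
    if op == "resize" or (op == "remove" and len(g) == 1):
        i = random.randrange(len(g))
        g[i] = max(4, g[i] + random.choice([-16, -8, 8, 16]))
    elif op == "add":
        g.insert(random.randrange(len(g) + 1), random.choice([16, 32, 64]))
    else:
        g.pop(random.randrange(len(g)))
    return tuple(g)

# Simple (mu + lambda) evolutionary loop over architectures
population = [(32,), (64,), (32, 32), (128,)]
for generation in range(5):
    scored = sorted(((fitness(g), g) for g in population), reverse=True)
    parents = [g for _, g in scored[:2]]               # keep the two best designs
    population = parents + [mutate(random.choice(parents)) for _ in range(4)]
    print(f"gen {generation}: best acc {scored[0][0]:.3f} with layers {scored[0][1]}")
```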

Understanding crowd psychology and safety through wearable technologies

Supervisors: Prof John Drury, Prof Daniel Roggen

Apply for a PhD in Psychology

Analysing and modelling the behaviour of people in crowds can help safely manage live events, improve response to emergencies, and inform the design of public spaces [1]. Wearable and mobile sensing now allows us to sense behaviour "in the wild" and in real-time [2] [3] [4] [5]. This project would bring together novel wearable and mobile sensing with concepts from group psychology to understand co-action and coordinated behaviour in pedestrian flow. The ideal student will have strong computing skills and a background in computer science, maths, physics or psychology. They will be interested in research at the interface between psychology and computing.

[1] Kleinmeier, Köster and Drury, Agent-based simulation of collective cooperation: from experiment to model. J. R. Soc. Interface, 2020

[2] Wirz et al. Probing crowd density through smartphones in city-scale mass gatherings. EPJ Data Science, 2(5):1-24, 2013.

[3] Wang et al. Enabling reproducible research in sensor-based transportation mode recognition with the Sussex-Huawei dataset. IEEE Access, 2019

[4] Chavarriaga et al. The Opportunity challenge: A benchmark database for on-body sensor-based activity recognition. Pattern Recognition Letters, 2013

[5] Templeton, A., Drury, J., Philippides, A. (2018). Walking together: Behavioural signatures of psychological crowds. Royal Society Open Science 5, 180172.

Computational neurophenomenology – how neural mechanisms shape perceptual experience

Supervisors: Prof Anil Seth, and one or more of Dr Chris Buckley, Dr Warrick Roseboom, Dr Ivor Simpson, Dr David Schwartzman, Prof Andy Clark

Apply for a PhD in Informatics or Cognitive Science

This project explores machine learning and artificial intelligence (ML/AI) approaches to simulating properties of perceptual experience (‘phenomenology’), and connecting these properties to underlying neural mechanisms. The core idea is that perceptual experience depends on neurally-encoded predictions about the causes of sensory signals. However, current computational models of this process do not readily account for the richness and range of perceptual phenomenology. We welcome proposals which bridge disciplines to advance this research. Areas of potential focus include generative models of visual ‘hallucinatory’ experience, and ML/AI algorithms incorporating aspects of ‘attention’. The project will include opportunities to test models using behavioural and neuroimaging data, and exploration of philosophical implications.

The project will suit a candidate with a strong background in computational neuroscience and/or machine learning, with knowledge of cognitive neuroscience and an interest in consciousness research.

[1] Hohwy, J., and Seth, A.K. (in press). Predictive processing as a systematic basis for identifying the neural correlates of consciousness. Philosophy and the Mind Sciences

[2] Tschantz, A., Seth, A.K., and Buckley, C.L. (2020). Learning action-oriented models through active inference. PLoS Computational Biology. 16(4):e1007805

[3] Suzuki, K., Roseboom, W., Schwartzman, D.J., and Seth, A.K. (2017). The hallucination machine: A novel method for studying the phenomenology of visual hallucination. Scientific Reports 7(1):15982

Teaching deep learning machines to remember like humans do

Supervisors: Dr Viktoriia Sharmanska, Prof Thomas Nowotny

Apply for a PhD in Informatics

There is an open problem in deep learning: neural networks are predominantly amnesiac, i.e. they forget past tasks as soon as they face a new learning task [1]. The aim of this project is to seek new solutions for curing amnesiac neural networks based on knowledge about human memory. According to cognitive neuroscience, humans are good at remembering information with emotional content [2,3]: for example, emotionally arousing pictures (especially exciting ones) influence long-term recognition memory in humans significantly more than neutral images.

The research question in this project is: can we design neural networks that mimic human long-term recognition memory? To address it we will need to a) define and assess the ‘memory’ of modern neural networks, b) devise new models inspired by the properties of human memory, and c) establish a protocol for hypothesis testing, e.g., using curriculum learning [4] from easy-to-remember to hard-to-remember tasks, and cross-dataset learning [5] from image and video datasets.

[1] Li Z, Hoiem D: Learning without forgetting (2016) ECCV. arXiv

[2] Burke A, Heuer F, Reisberg D. Remembering emotional events (1992), Memory & Cognition.

[3] Marchewka A, Wypych M, Moslehi A, et al. Arousal Rather than Basic Emotions Influence Long-Term Recognition Memory in Humans (2016) Frontiers in Behavioral Neuroscience.

[4] Pentina A, Sharmanska V, Lampert CH. Curriculum learning of multiple tasks (2015), CVPR.

[5] Sharmanska V, Quadrianto N. Learning from the Mistakes of Others: Matching Errors in Cross Dataset Learning (2016), CVPR.
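
As a minimal illustration of step (a), quantifying forgetting, the sketch below trains a logistic regression on an invented task A, continues training on a conflicting task B, and measures how accuracy on task A degrades. This is a toy example only; the datasets and model are made up and are not the project's benchmark.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task(informative_dim, n=2000, d=10):
    """Binary task whose label depends on a single feature dimension."""
    X = rng.standard_normal((n, d))
    y = (X[:, informative_dim] > 0).astype(float)
    return X, y

def train(w, X, y, lr=0.1, epochs=200):
    """Plain logistic-regression gradient descent (full-batch for simplicity)."""
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def accuracy(w, X, y):
    return np.mean(((X @ w) > 0) == (y > 0.5))

# Task A depends on feature 0, task B on feature 1: learning B overwrites A's weights.
XA, yA = make_task(0)
XB, yB = make_task(1)

w = np.zeros(XA.shape[1])
w = train(w, XA, yA)
print(f"task A accuracy after learning A: {accuracy(w, XA, yA):.2f}")

w = train(w, XB, yB)          # sequential training, no replay or regularisation
print(f"task A accuracy after learning B: {accuracy(w, XA, yA):.2f}  (forgetting)")
print(f"task B accuracy:                  {accuracy(w, XB, yB):.2f}")
```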

Materiality, Technology, and Mind

Supervisors: Prof Andy Clark, Dr Beatrice Fazi

Apply for a PhD in Philosophy

Studying contemporary cognitive systems involves exploring the complex relation between biology and technology, for instance addressing how they can merge (Clark, 2004) but also remain distinct (Fazi, 2019). This project tackles these questions by developing interdisciplinary approaches to materiality and mind. One area of interest is how biological intelligence and artificial intelligence can work together in ways that respect constraints of safety and transparency. Another is how biological brains and material structures (towns, artefacts, technologies) interact, each shaping and being shaped by the other over many temporal scales, and how that can enhance or diminish the space of human possibility. Another possible focus is the way neural prediction machinery engages changing technologies, and what artificial intelligence could learn from biologically plausible process models such as predictive processing and active inference. Students with a background in any relevant discipline (for example, philosophy, cognitive science, artificial intelligence, media theory or socio-technological systems theory) are encouraged to apply.

[1] Clark, Andy (2004) Natural-born Cyborgs: Minds, Technologies, and the Future of Human Intelligence. Oxford: Oxford University Press.

[2] Fazi, M. Beatrice (2019) ‘Can a Machine Think (Anything New)? Automation Beyond Simulation’, AI & Society, 34(4): 813–824.

[3] Clark, Andy (2016) Surfing Uncertainty: Prediction, Action, and Embodied Mind. New York: Oxford University Press.

Individual differences in spectral processing in fish: nature or nurture?

Supervisors: Prof Tom Baden, Dr Jenny Bosten

Apply for a PhD in Neuroscience

How animals perceive the world is dictated by the make-up of their sensory organs as well as all the subsequent stages of neural processing. Here, activation patterns across receptor neurons dictate the fundamental limits of what can and cannot be distinguished, but beyond this, differential information must also be retained, processed and interpreted. All this requires neural circuits, and this project sets out to investigate if, how, and why these circuits can differ between individuals of the same species (Linneweber et al. 2020; Stern et al. 2017).

To get at this basic problem, we will focus on the colour vision system of the experimentally amenable larval zebrafish. Previous work and ongoing technological developments in our labs allow us to measure high-resolution in-vivo spectral tunings of any neuron(s) in the zebrafish visual pathway, from photoreceptors to higher-order brain circuits (Bartel et al. 2020; Janiak et al. 2019; Yoshimatsu et al. 2020). Preliminary data indicate substantial differences between individual animals in the way that they process spectral information. Drawing on the powerful experimental toolkit available in zebrafish genetics and optophysiology, as well as behaviour and computational modelling of individuals’ receptoral and postreceptoral colour representations (Boehm et al. 2014) and spectral discrimination thresholds (Vorobyev & Osorio 1998), we aim to understand what it is within the zebrafish visual brain that leads to the observed differences: genetic variation between individuals (“Nature”), or developmental variation and/or experience-dependent plasticity (“Nurture”)?

Bartel P, Janiak FK, Osorio D, Baden T. 2020. Colourfulness as a possible measure of object proximity in the larval zebrafish brain. bioRxiv. 2020.12.03.410241

Boehm AE, MacLeod DIA, Bosten JM. 2014. Compensation for red-green contrast loss in anomalous trichromats. J. Vis.

Janiak FK, Bartel P, Bale M, T Y, Komulainen EH, et al. 2019. Divergent excitation two photon microscopy for 3D random access mesoscale imaging at single cell resolution. bioRxiv. 821405

Linneweber GA, Andriatsilavo M, Dutta SB, Bengochea M, Hellbruegge L, et al. 2020. A neurodevelopmental origin of behavioral individuality in the Drosophila visual system. Science.

Stern S, Kirst C, Bargmann CI. 2017. Neuromodulatory Control of Long-Term Behavioral Patterns and Individuality across Development. Cell

Vorobyev M, Osorio D. 1998. Receptor noise as a determinant of colour thresholds. Proc. R. Soc. B Biol. Sci.

Yoshimatsu T, Bartel P, Schröder C, Janiak FK, St-Pierre F, et al. 2020. Near-optimal rotation of colour space by zebrafish cones in vivo. bioRxiv
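
For illustration, the receptor-noise-limited model of Vorobyev & Osorio (1998) can be sketched as follows for a trichromatic observer. This is a minimal, hypothetical example: zebrafish are tetrachromatic, so the project would use the four-channel generalisation, and the quantum catches and noise values below are invented.

```python
import numpy as np

def rnl_distance_trichromat(q_a, q_b, e):
    """Chromatic distance (in just-noticeable differences) between stimuli A and B.

    q_a, q_b : photoreceptor quantum catches for the two stimuli (length 3)
    e        : Weber-fraction noise in each receptor channel (length 3)
    Follows the trichromatic form of the receptor-noise-limited model.
    """
    df = np.log(np.asarray(q_a) / np.asarray(q_b))   # receptor contrasts
    e1, e2, e3 = e
    num = (e1 * (df[2] - df[1])) ** 2 \
        + (e2 * (df[2] - df[0])) ** 2 \
        + (e3 * (df[0] - df[1])) ** 2
    den = (e1 * e2) ** 2 + (e1 * e3) ** 2 + (e2 * e3) ** 2
    return np.sqrt(num / den)

# Invented quantum catches for two stimuli and per-channel noise levels
q_stim_a = [0.80, 0.55, 0.30]
q_stim_b = [0.78, 0.50, 0.36]
noise    = [0.05, 0.05, 0.05]
print(f"chromatic distance: {rnl_distance_trichromat(q_stim_a, q_stim_b, noise):.2f} JND")
```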

Diverse, controllable music generation

Supervisors: Chris Kiefer, Dr Ivor Simpson, Prof Thor Magnusson

Apply for a PhD in Music Composition

This project investigates how to build controllable creative AI models that can generate music. Music is a widespread cultural activity that combines elements of structure and improvisation to build a coherent experience. There have been significant successes in this area, and now we look to the next generation of models to enable flexible and intuitive creative manipulation and broader generative diversity. This interdisciplinary PhD, supervised between Music and Informatics, will explore these issues and investigate: how to learn models that create coherent and diverse music; what components provide appropriate inductive biases; mechanisms for creative manipulation and how to evaluate model success.

Huang, Cheng-Zhi Anna, et al. "Music transformer: Generating music with long-term structure." ICLR 2018.

Dhariwal, Prafulla, et al. "Jukebox: A generative model for music." arXiv preprint 2020.

Probabilistic inference and control in animals and machines

Supervisors: Dr Christopher Buckley, and one or more of Prof Anil Seth, Prof Leon Lagnado, Dr Arash Moradinegade Dizqah

Apply for a PhD in Informatics

Animals are able to thrive in noisy and uncertain environments. Converging theory in the brain sciences suggests that they achieve this by operating as probabilistic inference machines [1,2,3]. On this view, perception, action and learning can all be understood as the minimisation of the divergence between an inferred distribution over environmental states and a desired target distribution. This information-theoretic starting point underpins modern algorithms in machine learning (e.g., intrinsic measures, maximum entropy reinforcement learning), current theory in the cognitive sciences (e.g., active inference), and process theories in neuroscience (e.g., predictive coding and optimal control). While the major focus of the project will be on modelling and theory in this area, we particularly encourage proposals that demonstrate an ambition to ground ideas in experimental neuroscience, machine learning practice or robotics. Applicants should have a background in a quantitative science; experience with maths and programming is essential. This project will also involve potential research exchanges with the Center for Human Nature, Artificial Intelligence, and Neuroscience (CHAIN) at Hokkaido University, Japan.

[1] Pouget, A., Beck, J., Ma, W. et al. Probabilistic brains: knowns and unknowns. Nat Neurosci 16, 1170–1178 (2013).

[2] Karl J. Friston, Marco Lin, Christopher D. Frith, Giovanni Pezzulo, J. Allan Hobson, and Sasha Ondobaka. Active Inference, Curiosity and Insight. Neural Computation 2017 29:10, 2633-2683

[3] A. Tschantz, M. Baltieri, A. K. Seth and C. L. Buckley, "Scaling Active Inference," 2020 International Joint Conference on Neural Networks (IJCNN), Glasgow, United Kingdom, 2020, pp. 1-8
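
To make the "inference as divergence minimisation" framing concrete, the sketch below follows a standard single-variable predictive-coding tutorial setup, casting perception as gradient descent on the free energy of a simple Gaussian generative model. It is a minimal illustration with invented numbers, not a model from the project.

```python
import numpy as np

# Generative model (all numbers invented for illustration):
#   prior over hidden state:   x ~ N(mu_prior, sig_prior^2)
#   likelihood of observation: y ~ N(g(x), sig_obs^2), with g(x) = x**2
mu_prior, sig_prior = 3.0, 1.0
sig_obs = 0.5
g  = lambda x: x ** 2
dg = lambda x: 2 * x

y_observed = 10.0   # sensory datum

def free_energy(phi):
    """Negative log joint (up to constants) under a point estimate phi of the hidden state."""
    return ((y_observed - g(phi)) ** 2) / (2 * sig_obs ** 2) \
         + ((phi - mu_prior) ** 2) / (2 * sig_prior ** 2)

# Perception = gradient descent on free energy with respect to the internal estimate phi
phi, lr = mu_prior, 0.005
for _ in range(1000):
    eps_obs   = (y_observed - g(phi)) / sig_obs ** 2    # sensory prediction error
    eps_prior = (phi - mu_prior) / sig_prior ** 2       # prior prediction error
    phi += lr * (eps_obs * dg(phi) - eps_prior)         # descend the free-energy gradient

print(f"posterior estimate of hidden state: {phi:.3f}, free energy: {free_energy(phi):.3f}")
```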

Evolving neural circuits throughout life

Supervisors: Dr Takeshi Yoshimatsu, Dr Andrew Penn

Apply for a PhD in Neuroscience

Animals’ nervous systems are not static machines; they are dynamic and constantly evolving throughout life to adjust to body growth, physical strengthening and new social interactions [1]. Because cell genesis is completed during development, this evolution of the nervous system is accomplished without adding new components to the system. Focusing on zebrafish visual circuits, this project investigates how the functions of existing neural circuits evolve throughout life, thus gaining mechanistic insights into the flexibility of biological intelligent systems. You will employ cutting-edge molecular biology methods, including CRISPR/Cas9 technology to generate genome-modified animals [2], and state-of-the-art two-photon microscopy [3] to monitor function, dynamic synaptic remodeling and neuronal activity in living animals.

[1] Blakemore SJ, Choudhury S. (2006) Development of the adolescent brain: implications for executive function and social cognition. J Child Psychol Psychiatry. 47(3-4):296-312

[2] Kimura Y, Hisano Y, Kawahara A, Higashijima S.(2014) Efficient generation of knock-in transgenic zebrafish carrying reporter/driver genes by CRISPR/Cas9-mediated genome engineering. Sci Rep. 2014 Oct 8;4:6545.

[3] Janiak FK, Bartel P, Bale M, T Y, Komulainen EH, et al. 2019. Divergent excitation two photon microscopy for 3D random access mesoscale imaging at single cell resolution. bioRxiv. 821405

How does sensory cortex embed information about actions and outcomes?

Supervisors: Dr Andre Maia Chagas, Prof Miguel Maravall

Apply for a PhD in Neuroscience

How does sensory cortex embed information about actions and outcomes? A textbook view of sensory processing in the mammalian brain, which has often inspired hierarchical architectures for information processing, holds that neurons in sensory pathways up into the cerebral cortex respond to sensory features and that experience-dependent learning primarily refines these sensory responses. This view is challenged by recent discoveries from our group and others [1], which have shown that goal-directed task learning can cause neurons even in “sensory” areas of the cortex to display complex responses, reflecting associations between relevant stimulus properties, the actions that the animal performs in the context of the task, and the outcomes of those actions. The engagement and contributions of different cortical areas depend on task details and demands, even when sensory input is identical (e.g. [2]). This suggests that sensory cortex is not purely a “front end” with a succession of filters passing evidence to higher areas, but is actively involved in task processing: as a consequence, an animal will not, in effect, sense the world independently of what it needs to feel in order to guide behaviour. This framework for understanding sensory cortex may have significant consequences for the artificial emulation of biological intelligence.

This project will explore the neuronal circuits underlying these complex responses, and how their connections depend on task properties. You will train mice to perform different sensory-guided tasks, altering the relationship of a sensory stimulus to potential actions; you will record and manipulate neuronal activity using state-of-the-art methods [3] and build and optimise your own equipment and tools to control and measure mouse behaviour. The project will emphasise the use of open hardware approaches to lab equipment development and dissemination [4]: we particularly encourage applications from students who are interested in this exciting and powerful methodological approach and have a quantitative background.

[1] Bale MR, Bitzidou M, Giusto E, Kinghorn P, Maravall M (2020) Sequence Learning Induces Selectivity to Multiple Task Parameters in Mouse Somatosensory Cortex. Current Biology

[2] Pinto L, Rajan K, DePasquale B, Thiberge SY, Tank DW, Brody CD (2019) Task-Dependent Changes in the Large-Scale Dynamics and Necessity of Cortical Regions. Neuron 104, 810-824.

[3] Janiak FK, Bartel P, Bale MR, … Maravall M, Baden T (2019) Divergent excitation two photon microscopy for 3D random access mesoscale imaging at single cell resolution. bioRxiv

[4] Maia Chagas A (2018) Haves and have nots must find a better way: The case for open scientific hardware. PLoS Biol 16(9): e3000014.

How does the early visual system influence scene encoding and navigation?

Supervisors: Prof Jeremy Niven, Prof Paul Graham, Dr Cornelia Bühlmann

Apply for a PhD in Neuroscience

Many animals use learnt visual cues to navigate through their environment to acquire resources and return home afterward. Learning about visual cues that can be used for future navigation depends on the properties of the early visual system and how visual information is encoded. This project will use the wood ant as a model system in which to investigate the relationship between retinal processing and the encoding of visual scenes that are learnt for future navigation. In doing so, it will help us understand how sensory systems are related to learning, memory, and behaviour. The insights will be significant for computer vision and robotics, as well as biology and neuroscience.

[1] Buehlmann C, Wozniak B, Goulard R, Webb B, Graham P, Niven JE. (2020). Mushroom bodies are required for learned visual navigation, but not for innate visual behavior, in ants. Current Biology 30: 3438-3443.

[2] Knaden M, Graham P. (2016). The Sensory Ecology of Ant Navigation: From Natural Environments to Neural Mechanisms. Annual Review of Entomology 61: 63-76.

[3] Buehlmann C, Woodgate JL, Collett TS. (2016). On the encoding of panoramic visual scenes in navigating wood ants. Current Biology 26: 2022-2027.

Integrated information: from neural correlate to computational correlate of consciousness

Supervisors: Dr Adam Barrett, Prof Anil Seth, Dr Christopher Buckley

Apply for a PhD in Informatics

Integrated information and complexity theories of consciousness relate key properties of conscious experience to certain forms of information dynamics [1]. Strikingly, empirical neural markers of these information dynamics successfully index global states of consciousness, e.g., sleep stages and levels of anaesthesia. However, these theories remain short on hypotheses about the relationship between consciousness and computation, cognition, perception and/or intelligence. Meanwhile, prominent computational theories of perception, action, and cognition – such as predictive coding, Bayesian brain and the free energy principle – make few, if any, theoretical claims about what distinguishes conscious from unconscious mental states [2]. This project will build bridges between these theories, starting from either or both ends, and encompassing modelling and/or empirical studies. Potential directions include (i) applying ‘integrated information decomposition’ [3] to relate different modes and time-courses of information flow to different conscious contents; or (ii) examining how (conscious and unconscious) expectations modulate information dynamics during perception and action.

The project will suit a candidate with a strong background in mathematics, computational neuroscience, and/or machine learning, with an interest in consciousness research.

[1] Mediano, P.A.M., Seth, A.K., & Barrett, A.B. (2019). Measuring integrated information: Comparison of candidate measures in theory and simulation. Entropy 21, 17

[2] Hohwy, J., & Seth, A.K. (in press). Predictive processing as a systematic basis for identifying the neural correlates of consciousness. Philosophy and the Mind Sciences

[3] Rosas, F.E., Mediano, P.A.M., Jensen, H.J., Seth, A.K., Barrett, A.B., Carhart-Harris, R.L., & Bor, D. (in press). Reconciling emergences: An information-theoretic approach to identify causal emergence in multivariate data. PLoS Computational Biology (pre-print: ArXiv 2004.08220)

Meta-plasticity, functional reconfiguration and morphological processing in motor behaviours: embodied models

Supervisors: Prof Phil Husbands, Prof Andy Philippides, Dr Chris Johnson

Apply for a PhD in Informatics

The ways in which multiple adaptive processes, operating at different temporal and spatial scales, interact in the nervous system and the body are not well understood, but have a crucial role in the generation of behaviour. Modulatory mechanisms (including diffusing neuromodulators) play an important part in such interactions, e.g. in meta-plasticity (the plasticity of plasticity) and in the operation of reconfiguring, multi-functional networks. Complex network dynamics, including oscillatory and chaotic dynamics, can also give rise to, and arise from, such interactions. Such processes are not confined to the nervous system. Recent work has shown how information processing can be shared between the body and the nervous system.

[1] Shim, Y. and Husbands, P. (2019) Embodied Neuromechanical Chaos through Homeostatic Regulation, Chaos 29(3):033123

[2] Johnson, C., Philippides, A. and Husbands, P. (2016) Active Shape Discrimination with Compliant Bodies as Reservoir Computers, Artificial Life 22(2):241-268

[3] M. O’Shea, P. Husbands, A. Philippides (2015) Nitric Oxide Neuromodulation. In Dieter Jaeger and Ranu Jung (Eds), Encyclopedia of Computational Neuroscience, vol. 3, 2087-2100, New York: Springer Reference.

[4] K. Briggman and W. Kristan (2008) Multi-functional pattern generating circuits, Ann. Rev. Neurosci. 31:271-294.

Modelling spatiotemporal uncertainty in perception and navigation

Supervisors: Dr Ivor Simpson, Prof Andy Philippides, Dr Christopher Buckley

Apply for a PhD in Informatics

Real-world inferences from spatiotemporal sensors, such as photographic or neuromorphic cameras, are often afflicted by ambiguities and uncertainties, e.g. how far away is that object, and what is it [1,2]? Accurate modelling of these uncertainties may have significant downstream effects on avoiding hazards and selecting routes, in animals and robots alike [3,4]. This project explores AI/ML methods for describing uncertainty in spatiotemporal datasets. Potential research areas include: methods for well-calibrated and robust uncertainty estimation; combining predictions over time; estimating the behaviour of other objects; integrating uncertainty into decision making and into learning models of environments; and investigating the biological plausibility and embodied deployment of any of the previous points.

Depending on interests, spatiotemporal datasets could come from different sensors and platforms, including neuromorphic cameras and mobile robots, and there is also potential to test hypotheses in insects.

[1] Kendall, Alex, and Yarin Gal. "What uncertainties do we need in Bayesian deep learning for computer vision?" NeurIPS 2017.

[2] Dorta, Garoe, et al. "Structured uncertainty prediction networks." CVPR 2018.

[3] Graham, Paul, and Andrew Philippides. "Vision for navigation: what can we learn from ants?" Arthropod Structure & Development 46.5 (2017): 718-722.

[4] Claussmann, Laurène, et al. "A review of motion planning for highway autonomous driving." IEEE Transactions on Intelligent Transportation Systems 21.5 (2019): 1826-1848
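
As a minimal illustration of one way to expose predictive uncertainty, the sketch below fits a bootstrap ensemble to an invented 1-D regression problem and reads epistemic-style uncertainty off the ensemble disagreement. This is a toy example; the project would work with real spatiotemporal sensor data and richer models [1,2].

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented 1-D dataset: noisy observations on a limited interval
x_train = rng.uniform(-2.0, 2.0, size=60)
y_train = np.sin(x_train) + 0.1 * rng.standard_normal(x_train.size)

# Bootstrap ensemble of cubic polynomial regressors
n_members, degree = 20, 3
x_query = np.linspace(-4.0, 4.0, 9)          # includes points far from the training data
predictions = []
for _ in range(n_members):
    idx = rng.integers(0, x_train.size, x_train.size)      # resample with replacement
    coeffs = np.polyfit(x_train[idx], y_train[idx], degree)
    predictions.append(np.polyval(coeffs, x_query))
predictions = np.array(predictions)

mean = predictions.mean(axis=0)
std  = predictions.std(axis=0)               # ensemble disagreement as uncertainty
for xq, m, s in zip(x_query, mean, std):
    tag = "inside data range" if -2 <= xq <= 2 else "extrapolating"
    print(f"x = {xq:+.1f}: prediction {m:+.2f} +/- {s:.2f}  ({tag})")
```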

Embodied Trustworthy AI Agents

Supervisors: Dr Novi Quadrianto, Prof Andy Philippides, Prof Thomas Nowotny

Apply for a PhD in Informatics

Machine learning is already involved in decision-making processes that affect people's lives. Efficiency can be improved, costs can be reduced, and the personalization of services and products can be greatly enhanced. However, concerns are rising about how to ensure that the deployment of automated systems follows clear, useful principles and requirements of trustworthiness [1]. One example is algorithmic fairness in a dynamic setting [2], which has a time-varying component and so is best dealt with as a situated and embodied problem. In this project, you will therefore combine our initial work on fairness, transparency and robustness [3] with symbol grounding, embodiment and situatedness requiring a physical body [4], contributing to the work of the PAL lab [5] as well as being a member of the be.AI Leverhulme centre.

[1] High-Level Expert Group on Artificial Intelligence, Ethics guidelines for trustworthy AI, November 2020.

[2] A. D’Amour, H. Srinivasan, J. Atwood, P. Baljekar, D. Sculley, and Y. Halpern. Fairness is not static: deeper understanding of long term fairness via simulation studies. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, pages 525–534, 2020.

[3] ERC Starting grant: Bayesian Models and Algorithms for Fairness and Transparency

[4] ActiveAI - active learning and selective attention for robust, transparent and efficient AI

[5] PAL lab

Bio-inspired sensory systems for rapid active learning

Supervisors: Prof Paul Graham, Prof Thomas Nowotny, Prof Andy Philippides

Apply for a PhD in Informatics

Small-brained insects are highly proficient at many of the tasks that remain difficult for robots, particularly in the speed and robustness of their learning abilities. In contrast to AI methods, which generally require long training times and large amounts of labelled data, insects are experts at fast, adaptive and robust sensory-motor control, and are rapid learners of visual and olfactory information for long-distance navigation and exploration. What if we could give robots these abilities by mimicking insects' sensors, neural circuits and behaviours? In our research we combine biological experiments with computational neuroscience and robotic modelling to gain insight into the brains and learning mechanisms of insects and, in so doing, design novel bio-inspired controllers for autonomous robots [1,2].

We are looking for talented students to join us at Sussex and are happy to hear from applicants interested in any aspect of this work or with their own ideas in bio-inspired robotics and AI or computational neuroscience.

[1] Baddeley, B., Graham, P., Husbands, P., & Philippides, A. (2012). A model of ant route navigation driven by scene familiarity. PLoS Comput Biol, 8(1), e1002336.

[2] Knight, J. C., Sakhapov, D., Domcsek, N., Dewar, A. D., Graham, P., Nowotny, T., & Philippides, A. (2019, July). Insect-Inspired Visual Navigation On-Board an Autonomous Robot: Real-World Routes Encoded in a Single Layer Network. Proc ALife 19, 60-67.
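
As a minimal illustration of the familiarity-based navigation idea in [1], the sketch below stores a set of 1-D panoramic "views" along a route and recovers a heading by rotating the current view until it best matches memory. This is a toy example with invented views; the published models use panoramic images and trained networks.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy world: a panoramic "view" is a 1-D brightness profile over 72 azimuth bins (5 deg/bin).
# Views are stored along a training route; at test time the agent rotates on the spot and
# heads in the direction whose rotated view is most familiar (smallest difference to memory).
n_bins = 72
route_memory = [rng.random(n_bins) for _ in range(10)]   # invented snapshots from the route

def familiarity(view, memory):
    """Negative distance to the best-matching stored snapshot (higher = more familiar)."""
    return -min(np.sum((view - snap) ** 2) for snap in memory)

def best_heading(current_view, memory):
    """Rotate the current view through all headings and pick the most familiar one."""
    scores = [familiarity(np.roll(current_view, shift), memory) for shift in range(n_bins)]
    return int(np.argmax(scores)) * (360 // n_bins)       # heading in degrees

# Test: a view taken on the route but facing 90 degrees away from the stored direction.
test_view = np.roll(route_memory[3], -90 // (360 // n_bins)) + 0.02 * rng.random(n_bins)
print(f"recovered heading: {best_heading(test_view, route_memory)} degrees (expected ~90)")
```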

Putting Computational Neuroscience to work

Supervisors: Prof Thomas Nowotny, Prof Andy Philippides, Prof Paul Graham

Apply for a PhD in Informatics

In recent years, there has been a lot of progress in Computational Neuroscience. Continuous attractor models and decision making circuits, sparse coding and 3-factor learning rules have all helped shed light on neural circuit function and the mechanisms of learning. However, there is a growing acceptance that in order to understand how adaptive behaviour is generated, we can’t treat neural circuits as isolated models but need to consider their dynamic interaction with bodies and the environment. We believe that this will not only provide insight into natural intelligence but also inspire novel artificial intelligence. As we now have the computational tools to simulate large-scale neural models efficiently [1,2] and on small devices [3], we would like to hear from people who want to push the boundaries of computational neuroscience models by applying them to dynamic/embodied problems.

[1] Knight, J. C., & Nowotny, T. (2020). Larger GPU-accelerated brain simulations with procedural connectivity. BioRxiv. doi: 10.1101/2020.04.27.063693.

[2] J. C. Knight, T. Nowotny (2018) GPUs outperform current HPC and neuromorphic solutions in terms of speed and energy when simulating a highly-connected cortical model. Front Neurosci 12:941.

[3] Knight, J. C., Sakhapov, D., Domcsek, N., Dewar, A. D., Graham, P., Nowotny, T., & Philippides, A. (2019, July). Insect-Inspired Visual Navigation On-Board an Autonomous Robot: Real-World Routes Encoded in a Single Layer Network. Proc ALife 19, 60-67.

Swarm exploration: multi-robot visual navigation through insect-inspired strategies

Supervisors: Prof Andy Philippides, Prof Paul Graham, Prof Thomas Nowotny

Apply for a PhD in Informatics

Navigation is a vital task for autonomous robots, whether searching a disaster zone or exploring a planet. Bees are champion navigators, travelling miles for food efficiently and accurately: what if we could give robots the navigation abilities of a bee? And what if we could share the learnt knowledge in multi-robot teams? For instance, flying robots could rapidly survey an area, identify a person in difficulty and pass their knowledge to bigger, but slower, wheeled robots that can offer assistance. Building on our work at Sussex [1,2] you will develop bio-inspired algorithms that allow such navigation in heterogeneous robot teams, opening up a new area of robot autonomy.

[1] Baddeley, B., Graham, P., Husbands, P., & Philippides, A. (2012). A model of ant route navigation driven by scene familiarity. PLoS Comput Biol, 8(1), e1002336.

[2] Dewar, A., Graham, P., Nowotny, T., & Philippides, A. (2020). Exploring the robustness of insect-inspired visual navigation for flying robots. In Proc. Alife 2020: 668-677.

Human-robot skill transfer through physical interaction

Supervisors: Dr Yanan Li and Dr Nicolas Herzig

Apply for a PhD in Engineering

Learning from human demonstrations is an important approach that allows a robot to be deployed quickly in unstructured environments. Rather than feeding the task knowledge into a robot using machine commands, programming by demonstration (PbD) [1] facilitates human-robot skill transfer by enabling a human to guide a robot through a kinesthetic or tele-operation interface. However, traditional PbD does not allow intuitive task modification and requires extensive offline demonstrations. By combining machine learning and control theory, this project aims at developing an efficient, intuitive PbD interface, with a robotic learning algorithm mimicking human learning as observed in human-human interaction [2].

[1] A Billard et al. (2008), Robot Programming by Demonstration. In: Siciliano B., Khatib O. (eds) Springer Handbook of Robotics. Springer, Berlin, Heidelberg.

[2] V. Duchaine and C. Gosselin (2009), Safe, Stable and Intuitive Control for Physical Human-Robot Interaction, IEEE International Conference on Robotics and Automation, Kobe, pp. 3383-3388.

Mapping the origins of movement and motor coordination

Supervisors: Prof Claudio Alonso, Dr Lucia Prieto-Godino, Prof Luc Berthouze

Apply for a PhD in Neuroscience

Movement is a defining trait of animals and robotic systems, but how the neural networks in the developing animal brain manage to adopt their specific connectivities, and generate and modulate their activities to control movement, is currently unknown. In particular, we still do not know what controls the transition from the uncoordinated motor activities of developing embryos to the precise patterns of movement observed in fully formed organisms. This project will exploit the simplicity and genetic accessibility of the fruit fly Drosophila melanogaster to investigate the origins of animal movement, and use computational modelling to generate testable hypotheses about circuit structure and coordination. For this we will combine modern experimental work (genetic, optogenetic, advanced microscopy, deep-neural-network quantitative behavioural analysis) with state-of-the-art modelling approaches (neural selection, signal processing, time-series and dynamical systems analyses). This genuinely interdisciplinary effort will thus help us define the mechanisms underlying the emergence of coordinated movement, and extract core principles for the design of effective robotic locomotion and adaptation.

[1] Picao-Osorio, J, Johnston, J., Landgraf, M., Berni, J. and Alonso, C.R. (2015) microRNA encoded behavior in Drosophila, Science 350:815-20

[2] Issa, A.R., Picao-Osorio, J., Rito, N., Chiappe, M.E. and Alonso, C.R. (2019) A Single MicroRNA-Hox Gene Module Controls Equivalent Movements in Biomechanically Distinct Forms of Drosophila, Current Biology 29:2665-2675.e4.

[3] Prieto-Godino, L., Diegelmann, S. and Bate, M. (2012) Embryonic Origin of Olfactory Circuitry in Drosophila: Contact and Activity-Mediated Interactions Pattern Connectivity in the Antennal Lobe, PLoS Biology 10(10):e1001400.

[4] Berthouze, L. and Goldfield E.G. (2008) Assembly, tuning, and transfer of action systems in infants and robots, Infant and Child Development 17:25-42.

[5] Hartley, C, Farmer, S. and Berthouze, L. (2020) Temporal ordering of input modulates connectivity formation in a developmental neuronal network model of the cortex, PLoS One 15(1):e0226772.

[6] Loveless, J., Garner, A., Issa, A.R., Roberts, R., Webb, B., Prieto-Godino, L., Ohyama, T. and Alonso, C.R. (2020) A physical theory of larval Drosophila behaviour. BioRxiv doi.org/10.1101/2020.08.25.266163.

