Research

  • Machine Learning: Systematicity, Creativity and Metacognition
    • Many of the limitations of neural networks identified in the late 80s and early 90s (e.g., lack of systematicity and productivity) still apply to today’s Deep Learning systems.
    • Systematicity may be fostered by deriving a quantitative measure for it which can then be optimised in objective functions, and through careful attention to how training sets are constructed.
    • Such systematicity can facilitate the acquisition of conceptual/object-level representations, which improve intelligibility and intelligence.
    • Hypothesis: Creative generation of outputs can be facilitated by a GAN-like architecture that favours outputs that require effort (but not too much effort) to be predicted/compressed, together with an adaptive predictor/compressor.
    • Hypothesis: Systems that learn higher-order transformations of their parameter sets, and the relationships between these operators and lower-order performance, will be able to learn complex relationships quickly (and perhaps intelligibly).
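    The creativity hypothesis above can be given a minimal illustrative sketch. The code below is not from the research itself; it is a toy rendering, under stated assumptions, of the "moderate effort" criterion: candidate outputs are scored by how much effort an adaptive predictor/compressor needs (here, stood in for by a prediction-loss number), with the score peaking at intermediate difficulty. All function names and the Gaussian-shaped scoring curve are illustrative choices, not the author's method.

    ```python
    import math

    # Hypothetical sketch: score a candidate output by the predictor's loss on it.
    # Outputs that are trivially predictable (loss near 0) or hopelessly hard
    # (very high loss) score low; intermediate-effort outputs score highest.
    def interestingness(prediction_loss: float, target: float = 1.0, width: float = 0.5) -> float:
        """Score peaking when the predictor's loss is near `target` (an inverted-U shape)."""
        return math.exp(-((prediction_loss - target) ** 2) / (2 * width ** 2))

    # Rank candidate outputs (represented only by their prediction losses) and
    # keep the top_k most "interesting" ones for the generator to favour.
    def select_candidates(losses, top_k=2):
        ranked = sorted(range(len(losses)),
                        key=lambda i: interestingness(losses[i]),
                        reverse=True)
        return ranked[:top_k]

    # Example: losses for four candidates; the intermediate ones win.
    chosen = select_candidates([0.1, 1.0, 3.0, 0.9])  # -> [1, 3]
    ```

    In a full GAN-like system the predictor would itself be trained on the selected outputs, so the "sweet spot" of moderate effort moves as the predictor improves; this fixed curve only illustrates the selection step.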
  • Machine consciousness: Expectation-based models of experiential content
    • The non-conceptual content of perceptual experience can be understood in terms of the possession of an array of expectations concerning how performing each of a set of actions would transform sensory input.
    • Broad similarities to Predictive Processing/Coding models, but predates them and differs in some key respects
    • Proper depiction of these expectational arrays can assist in synthetic phenomenology (see Non-conceptual content for (AI) intelligibility)
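    The expectational-array idea above can be sketched in code. This is a toy model under my own assumptions (the 1-D "retina", the action names, and the helper function are all illustrative, not from the source): non-conceptual content is modelled as a mapping from each available action to the sensory input the agent expects that action to yield.

    ```python
    from typing import Callable, Dict

    # A sensory state is a tuple of intensity values on a toy 1-D "retina".
    SensoryState = tuple

    # Hypothetical helper: build the agent's expectational array by pairing each
    # action with the sensory state it is expected to produce from `state`.
    def build_expectations(state: SensoryState,
                           actions: Dict[str, Callable[[SensoryState], SensoryState]]
                           ) -> Dict[str, SensoryState]:
        return {name: act(state) for name, act in actions.items()}

    # Toy embodiment: shifting gaze rolls the retinal input left or right.
    retina = (0, 0, 1, 0, 0)
    actions = {
        "look_left":  lambda s: s[1:] + s[:1],
        "look_right": lambda s: s[-1:] + s[:-1],
        "stay":       lambda s: s,
    }
    expectations = build_expectations(retina, actions)
    ```

    The resulting dictionary is a (crude) expectational array: the content of the agent's current perceptual state is exhausted by how it expects each action to transform its input, without any conceptual labels for what is seen.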
  • Machine consciousness: Qualia
    • The problem of qualia, widely believed to be the major impediment to physicalist accounts of consciousness, can be resolved by embracing a revisionist view of qualia.
    • Contra eliminativists, the term “qualia” might in fact refer to something real and physical; contra dualists, the referent would lack the problematic properties many typically believe qualia to have (intrinsicness, privacy, immediacy, ineffability, etc).
    • Building a system with qualia, then (beyond building a system that has the “easy”, uncontentiously functionally understandable aspects of consciousness), is a matter of giving machines the same kinds of states that, in us, dispose us to talk of qualitative states, to which we are disposed to attribute those problematic properties, etc.
  • Non-conceptual content for (AI) intelligibility
    • A key difficulty in understanding/predicting an AI (especially Deep Learning) system derives from the non-human modes of representation that such systems employ
    • Managing representational schemes that are not expressive in terms of our conceptual schemes is a focus of my research into non-conceptual content (NCC)
    • NCC theory suggests techniques for rigorously referring to and making intelligible “alien” representational contents
    • In fact, machine models of cognition/experience may assist in providing such techniques: what I call “synthetic phenomenology”
    • A good way to understand how an AI/robot is representing the world is in terms of the embodied expectations it has for how its actions might transform its inputs (see Machine consciousness: Expectation-based models of experiential content)
    • Another strategy to achieve intelligibility is to bias learning algorithms in favour of solutions that are interpretable by us (see Machine Learning: Systematicity, Creativity and Metacognition)
  • Deflationary AI ethics
    • Slogan: We shouldn’t strive to design ethical robots, but rather strive to design robots ethically
    • Artificial agents in the foreseeable future will not bear moral responsibility
    • However, given externalism, the informational states that play a role in allocating human responsibility may partly supervene on AI systems
    • The main difference from the ethics of other technologies is not artificial agency, but the complexity/unforeseeability of the behaviour of the artificial system
    • This unforeseeability may be mitigated by new techniques for AI intelligibility (see Non-conceptual content for (AI) intelligibility)
  • Embodiment (and AI)
    • The role of the body (physical realisation, relation to environment, temporality, etc.) is essential to understanding natural and artificial intelligence.
    • However, this does not support an anti-computational view of cognition, since these same aspects of embodiment are, ultimately, required to understand computational systems in general (see Philosophy of Computation).
  • Philosophy of Computation and Quantum Computation
    • Orthodox characterisations of computation are inadequate
    • A better account is one that sees computation as essentially intentional/semantic, rendering computational many properties typically viewed as mere implementation detail.
    • The result is a notion of computation that can dismiss some key arguments against its role in cognition, and against the possibility of AI in general.
    • Hypothesis: A machine running a program and a person executing the same set of instructions (à la Searle’s Chinese Room) are not in fact computationally equivalent, and there is reason to believe the latter does not possess the right kind of intentionality-blind counterfactuals to be the realiser of a mind.
    • Thus, the following fail: Searle’s Chinese Room argument, Otto-style arguments for extended mind, Block’s Chinese nation/group-minds argument, and the like.
    • Also: published the earliest known paper on quantum neural networks that learn.
  • AI: General Issues
  • Consciousness: General Issues
  • Philosophy of Language