I am currently working on Ethical Machine Learning (supported by EPSRC EP/P03442X/1 and Amazon AWS Cloud Credits for Research). The long-term goal of the project is to develop a computational framework with plug-and-play constraints that can handle fairness, transparency (interpretability and decision explanations), and confidentiality constraints, their combinations, and new constraints that may be stipulated in the future.

My research focuses on addressing Big Data challenges in the context of machine learning. My contributions include streaming algorithms for learning from massive unlabelled training data, and for tiering search-engine indices so that billions of webpages can be processed in seconds.

I have also contributed to non-standard learning settings, such as:

  1. learning from only label proportions,
  2. learning from several related tasks with distinct label sets, and
  3. inferring bipartite matchings with unobserved edge potentials.

Most recently, I have focused on nonparametric Bayesian models, which allow flexible modelling of complex Internet data.

I have also worked in computer vision, where I have looked at:

  1. parametric and non-parametric approaches for augmenting semantic attribute representations,
  2. learning to utilise privileged information, and
  3. learning from multiple datasets.