Data Science Research Group

Data science seminars

Seminars are held in Chichester 2R203 on Thursdays starting at 14:00, unless otherwise noted.

Directions to the University can be found here. The seminar room is located near label 25 on the campus map, on level 2.

Spring 2018

22 March 2018

Towards Understanding Representation Learning Systems: An Experimental Approach

Vicente Ivan Sanchez Carmona and Sebastian Riedel (UCL)

Representation learning systems learn a representation of data in a form suitable for use by other machine learning models, such as classifiers. However, some of these systems and the representations they learn are difficult to interpret. Thus, determining whether certain knowledge, such as hypernymy, has been learned, or understanding why and how a particular prediction was made, is difficult. In this talk, we will focus on three analyses that help us better understand such aspects of particular representation learning systems and embeddings: GloVe embeddings, the ESIM system for the task of natural language inference, and the popular Model F for the task of knowledge base population. The questions to be addressed are: Have GloVe embeddings learned hypernymy? How robust is the behavior of ESIM on challenging instances, and what factors influence its predictions? How can we explain the predictions of Model F? This talk is based on work published at EACL 2017, NAACL 2018 (to appear), and the AAAI Spring Symposium 2015.
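
To make the first question concrete, here is a minimal sketch of one generic way to probe embeddings for hypernymy: train a simple classifier on pairs of word vectors labelled as hypernym/not-hypernym and check whether it beats chance on held-out pairs. This is an illustration of the general probing idea, not the authors' method; the GloVe file path and the toy word pairs are assumptions.

```python
# Sketch: probe pretrained GloVe vectors for hypernymy with a linear classifier.
# Assumes a local copy of "glove.6B.50d.txt" (hypothetical path) and uses a
# tiny invented pair list; with real data, evaluate on held-out pairs.
import numpy as np
from sklearn.linear_model import LogisticRegression

def load_glove(path):
    vecs = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            vecs[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
    return vecs

glove = load_glove("glove.6B.50d.txt")  # assumed local file

# Toy training pairs: (hyponym, candidate hypernym, label)
pairs = [("dog", "animal", 1), ("cat", "animal", 1), ("oak", "tree", 1),
         ("dog", "tree", 0), ("oak", "animal", 0), ("cat", "car", 0)]

X = np.stack([np.concatenate([glove[a], glove[b]]) for a, b, _ in pairs])
y = np.array([label for _, _, label in pairs])

clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict(X))
```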

17 April 2018

Imitation Learning, Zero-shot Learning and Automated Fact Checking

Andreas Vlachos (Sheffield)

This seminar will take place on Tuesday 17th April, 3-4pm, in the CALPS lab.

In this talk I will give an overview of my research in machine learning for natural language processing. I will begin with my work on imitation learning, a machine learning paradigm I have used to develop novel algorithms for structured prediction that have been applied successfully to tasks such as semantic parsing, natural language generation, and information extraction. Its key advantages are the ability to handle large output search spaces and to learn with non-decomposable loss functions. I will then discuss my work on zero-shot learning using neural networks, which enabled us to learn models that can predict labels for which no data was observed during training. I will conclude with my work on automated fact checking, a challenge we proposed in order to stimulate progress in machine learning, natural language processing and, more broadly, artificial intelligence.
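
For readers unfamiliar with the imitation learning paradigm, the sketch below shows a minimal DAgger-style loop on a toy sequence-labelling task: the learner's own predictions determine which states are visited, and the expert is queried for the correct action at those states. This illustrates the general paradigm only; the task, expert rule, and feature function are invented placeholders, not the algorithms presented in the talk.

```python
# Sketch: DAgger-style imitation learning on a toy character-labelling task.
import numpy as np
from sklearn.linear_model import LogisticRegression

def expert_action(prev_label, char):
    # Hypothetical expert: label a character 1 if it is a vowel, else 0.
    return 1 if char in "aeiou" else 0

def features(prev_label, char):
    # Features: the previous predicted label plus a one-hot of the character.
    vec = np.zeros(27)
    vec[0] = prev_label
    vec[1 + (ord(char) - ord("a"))] = 1.0
    return vec

words = ["imitation", "learning", "structured", "prediction"]
X, y, policy = [], [], None
for iteration in range(5):  # DAgger iterations: aggregate data, refit policy
    for word in words:
        prev = 0
        for ch in word:
            f = features(prev, ch)
            X.append(f)
            y.append(expert_action(prev, ch))  # expert labels visited states
            if policy is None:
                prev = expert_action(prev, ch)  # first pass: roll in with expert
            else:
                prev = int(policy.predict(f.reshape(1, -1))[0])  # own policy
    policy = LogisticRegression(max_iter=1000).fit(np.array(X), np.array(y))
```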

26 April 2018

Neural Models for CCG Supertagging

Steve Clark (Cambridge and DeepMind)

In this talk I will describe how recurrent neural networks (RNNs) can be applied successfully to the CCG supertagging problem. The use of RNNs leads to substantial accuracy gains over previous feature-based taggers using maximum entropy models. In fact, when evaluated solely on supertagging accuracy, an LSTM supertagger is more accurate than the Clark and Curran CCG parser. I will then describe the work of Mike Lewis and colleagues, who have shown how a simple parsing algorithm on top of an LSTM supertagger can lead to highly accurate and efficient CCG parsing, substantiating the original claim of Bangalore and Joshi that supertagging can be thought of as 'almost parsing'.
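
As a rough illustration of the kind of model described, here is a minimal bidirectional LSTM supertagger sketch in PyTorch: an RNN reads the sentence and scores one supertag per word. The vocabulary size, tagset size, dimensions, and the random data are invented placeholders, not the talk's actual models or data.

```python
# Sketch: a BiLSTM tagger assigning one CCG supertag per word.
import torch
import torch.nn as nn

class BiLSTMSupertagger(nn.Module):
    def __init__(self, vocab_size, n_tags, emb_dim=50, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, bidirectional=True,
                            batch_first=True)
        self.out = nn.Linear(2 * hidden, n_tags)  # per-word supertag scores

    def forward(self, word_ids):
        h, _ = self.lstm(self.embed(word_ids))
        return self.out(h)  # shape: (batch, seq_len, n_tags)

# Toy usage: a 100-word vocabulary, 10 supertags, one random 6-word sentence.
model = BiLSTMSupertagger(vocab_size=100, n_tags=10)
words = torch.randint(0, 100, (1, 6))  # one sentence of word ids
tags = torch.randint(0, 10, (1, 6))    # gold supertag ids
loss = nn.CrossEntropyLoss()(model(words).view(-1, 10), tags.view(-1))
loss.backward()  # train with any optimiser, e.g. torch.optim.Adam
```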

The archive of previous seminars goes back to late 1996.