Data Science Research Group

Data science seminars

Seminars are held in Chichester 2R203 on Thursdays starting at 14:00, unless otherwise noted.

Directions to the University are available on the University website. The seminar room is located near label 25 on the campus map, on level 2.

Summer 2017

15 June 2017

Inferring Unobserved Co-occurrence Events in Anchored Packed Trees

Thomas Kober (Sussex)

Distributional models are derived from co-occurrences in a corpus, where only a small proportion of all plausible co-occurrences will be observed. This problem is amplified for models such as Anchored Packed Trees (APTs), which take the grammatical type of a co-occurrence into account. The result is a very sparse distributional space, requiring a mechanism for inferring missing knowledge. Most methods face this challenge in ways that render the resulting word representations uninterpretable, with the consequence that semantic composition becomes hard to model. In this talk, I explore an alternative that explicitly infers unobserved co-occurrences from the distributional neighbourhood; it exploits the rich type structure in APTs and infers missing data by the same mechanism that is used for semantic composition. I show that distributional inference improves performance on several word similarity benchmarks and achieves state-of-the-art performance on two short-phrase composition benchmarks.
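
As a rough illustration of the idea, the Python sketch below infers unobserved co-occurrences for a word by borrowing features from its distributional neighbours. It is a minimal toy assuming sparse feature-count vectors keyed by typed co-occurrences; the APT machinery itself (typed dependency paths, composition) is not reproduced here.

    from math import sqrt

    def cosine(u, v):
        """Cosine similarity between two sparse feature->count dicts."""
        shared = set(u) & set(v)
        num = sum(u[f] * v[f] for f in shared)
        den = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
        return num / den if den else 0.0

    def infer(word, space, k=3):
        """Enrich a sparse vector with features from its k nearest neighbours."""
        target = space[word]
        neighbours = sorted(
            (w for w in space if w != word),
            key=lambda w: cosine(target, space[w]),
            reverse=True,
        )[:k]
        inferred = dict(target)
        for n in neighbours:
            for feat, count in space[n].items():
                # Only add unobserved co-occurrences; observed counts stay as-is.
                if feat not in inferred:
                    inferred[feat] = count / k
        return inferred

    # Toy typed co-occurrence space (features stand in for APT paths).
    space = {
        "lion":  {"amod:fierce": 2, "nsubj:roar": 3},
        "tiger": {"amod:fierce": 1, "nsubj:roar": 2, "amod:striped": 4},
        "cat":   {"nsubj:purr": 5, "amod:fierce": 1},
    }
    print(infer("lion", space, k=2))

Here 'lion' acquires the unobserved features 'amod:striped' and 'nsubj:purr' from its neighbours, while its observed counts are left untouched, so the enriched vector remains interpretable.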

29 June 2017

Fourteen Components of Creativity

Bill Keller (Sussex)

Many commonsense concepts may be characterised as “essentially contested”, in that they resist a straightforward, universally agreed-upon interpretation or realisation. Where such concepts play a role in scientific enquiry, this can be problematic. A case in point is the concept of creativity, which encompasses a variety of related aspects, abilities, properties and behaviours. This poses problems for the study of creative practice generally, and for computational creativity in particular, where a tractable and well-articulated model of the concept is needed for the purpose of evaluation. This talk describes joint work with Anna Jordanous (University of Kent) on a novel approach to modelling the notion of creativity. Statistical language processing techniques were applied to the analysis of a corpus of academic papers on the topic. Words that appeared significantly often in connection with the concept were identified and then clustered to discern a number of key components. The components provide a set of 'building blocks' for creativity that have been used to model and evaluate creative practice.
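
The sketch below illustrates the general shape of such a pipeline in Python: words significantly associated with a topic corpus are identified with Dunning's log-likelihood ratio and then clustered into candidate components. The corpora, threshold and word vectors are toy stand-ins of my own, not the data or settings used in the study.

    import math
    from collections import Counter

    import numpy as np
    from sklearn.cluster import KMeans

    def log_likelihood(a, b, c, d):
        """Dunning's G2 for a word seen a times in a topic corpus of size c
        and b times in a reference corpus of size d."""
        e1 = c * (a + b) / (c + d)
        e2 = d * (a + b) / (c + d)
        g2 = 0.0
        if a:
            g2 += 2 * a * math.log(a / e1)
        if b:
            g2 += 2 * b * math.log(b / e2)
        return g2

    # Toy corpora: word counts from papers about creativity vs. a reference.
    topic = Counter({"novelty": 8, "evaluation": 7, "value": 6, "process": 5, "data": 1})
    ref = Counter({"data": 10, "system": 8, "process": 6, "value": 5})
    n_topic, n_ref = sum(topic.values()), sum(ref.values())

    # Keep words over-represented in the topic corpus (G2 > 3.84, p < 0.05).
    key_words = [w for w in topic
                 if topic[w] / n_topic > ref.get(w, 0) / n_ref
                 and log_likelihood(topic[w], ref.get(w, 0), n_topic, n_ref) > 3.84]

    # Cluster the key words' embeddings into candidate components
    # (random vectors here; the study used corpus-derived representations).
    vectors = np.random.default_rng(0).normal(size=(len(key_words), 50))
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(vectors)
    for word, label in sorted(zip(key_words, labels), key=lambda p: p[1]):
        print(label, word)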

6 July 2017

Two Case Studies in Lexical Semantics: Hypernym Detection and Antonym Generation

Laura Rimell (Cambridge)

Detection of lexical semantic relations such as hypernymy, antonymy and meronymy using distributed word representations has practical use in many NLP applications, and success (or failure) in relation detection offers a better understanding of commonly used representations. An extension of the relation detection task, generation of word pairs governed by a lexical relation, has the further potential to improve natural language generation. In this talk I will address two approaches to lexical relations, one for hypernym detection and the other for antonym generation.

Hypernym detection in distributional spaces has typically been based on a notion of substitutability: co-occurrence contexts of a hyponym (e.g. 'lion') are assumed to be valid contexts for its hypernym (e.g. 'animal'). However, this assumption often fails. I will discuss an alternative approach that treats the top features of a sparse context vector as a topic, and introduce an entailment measure based on topic coherence which can be used for multi-way relation classification.

Turning to antonymy, previous work has focused on learning word representations that incorporate antonymy as part of the training objective. Instead, I describe how we can learn a mapping that predicts antonyms of adjectives in an arbitrary word embedding model. I will introduce a continuous class-conditional bilinear neural network, inspired by relation detection networks used in computer vision, which gates the input word vector using information about the semantic domain and is able to predict antonyms with high precision.
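
A hedged PyTorch sketch of a gated bilinear antonym mapping in this spirit is given below. The dimensions, gate form and training loop are my own assumptions for illustration, not the exact model from the talk.

    import torch
    import torch.nn as nn

    class GatedAntonymMapper(nn.Module):
        """Maps an adjective vector to a predicted antonym vector,
        gated by a vector representing its semantic domain."""
        def __init__(self, dim):
            super().__init__()
            self.gate = nn.Linear(dim, dim)              # domain -> gate
            self.bilinear = nn.Bilinear(dim, dim, dim)   # class-conditional map

        def forward(self, word_vec, domain_vec):
            g = torch.sigmoid(self.gate(domain_vec))     # soft domain gate in (0, 1)
            return self.bilinear(g * word_vec, domain_vec)

    dim = 50
    model = GatedAntonymMapper(dim)
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    loss_fn = nn.MSELoss()

    # One toy (adjective, domain, antonym) triple as random stand-ins
    # for pre-trained embeddings, e.g. ('hot', TEMPERATURE, 'cold').
    adj, domain, ant = (torch.randn(1, dim) for _ in range(3))
    for step in range(200):
        opt.zero_grad()
        loss = loss_fn(model(adj, domain), ant)
        loss.backward()
        opt.step()
    print(f"final training loss: {loss.item():.4f}")

At prediction time, the output vector would be matched against the embedding vocabulary by nearest-neighbour search to produce a concrete antonym candidate.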

The archive of previous seminars goes back to late 1996.