What does it mean to say a word has several meanings? On what grounds do lexicographers make their judgments about the number of meanings a word has? How do the senses a dictionary lists relate to the full range of ways a word might get used? How might NLP systems deal with multiple meanings? These are the questions the thesis addresses.

The `Bank Model' of lexical ambiguity, in which polysemy is treated as homonymy, is shown to be flawed. Words do not in general have a finite number of discrete meanings which an ideal dictionary would list. A word has, in addition to its dictionary senses, an indefinite range of extended uses. The lexicographer describes only the uses which occur reasonably frequently and are not entirely predictable from the word's core meanings.

Polysemy is not a natural kind. It describes the crossroads between homonymy, collocation, analogy and alternation. (An alternation is a pattern in which a number of words share the same relationship between pairs of usage-types.) Any non-basic type of use for a word can be treated as belonging in one of these four camps. For computational lexicography, putative polysemous senses should be represented in the lexicon as homonyms, or within collocations, or, implicitly, as the outcome of applying an alternation. Uses in the `analogy' camp will not be described in the lexicon.

As others have argued, a concise description of lexical knowledge must be inheritance-based. It must be possible to state and inherit generalisations. Within such a lexicon, regularly polysemous and other predictable usage-types can be described concisely. The thesis presents two fragments of the lexicon in which various alternations and other aspects of lexical structure are given concise formal treatments in an inheritance-based lexical representation language.
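The inheritance-based treatment of alternations can be illustrated with a minimal sketch. The code below is not from the thesis; all class and function names are hypothetical, and it uses the familiar animal/meat pattern of regular polysemy as the example alternation. The point is that the alternation is stated once, high in the hierarchy, and every entry inheriting from it acquires the predictable derived usage-type without it being listed separately.

```python
# Hypothetical sketch of an inheritance-based lexicon in which an
# alternation (here: animal -> meat) is stated once and inherited,
# rather than a separate 'meat' sense being listed for every animal word.

class LexEntry:
    """A lexical entry with one basic sense and inherited alternations."""

    def __init__(self, lemma, sense, parent=None):
        self.lemma = lemma
        self.sense = sense
        self.parent = parent

    def alternations(self):
        # Alternations are collected by inheritance up the hierarchy.
        return self.parent.alternations() if self.parent else []

    def senses(self):
        # Basic sense, plus usage-types derived by applying each
        # inherited alternation to this entry.
        return [self.sense] + [alt(self) for alt in self.alternations()]


class Animal(LexEntry):
    """Entries under Animal inherit the animal/meat alternation."""

    @staticmethod
    def animal_meat(entry):
        # Maps the animal usage-type to the predictable meat usage-type.
        return "flesh of " + entry.sense + ", as food"

    def alternations(self):
        return super().alternations() + [Animal.animal_meat]


chicken = Animal("chicken", "a domestic fowl")
print(chicken.senses())
# -> ['a domestic fowl', 'flesh of a domestic fowl, as food']
```

Here the `meat' reading of "chicken" never appears as a stored sense: it is the implicit outcome of applying an inherited alternation, which is the representational move the abstract advocates for predictable usage-types.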