Learning to Understand Phrases by Embedding the Dictionary

Felix Hill, Computer Laboratory, University of Cambridge. felix.hill@cl.cam.ac.uk
Kyunghyun Cho*, Courant Institute of Mathematical Sciences and Centre for Data Science, New York University. kyunghyun.cho@nyu.edu
Anna Korhonen, Department of Theoretical and Applied Linguistics, University of Cambridge. alk23@cam.ac.uk
Yoshua Bengio, CIFAR Senior Fellow, Université de Montréal. yoshua.bengio@umontreal.ca

*Work mainly done at the University of Montreal.

Abstract

Distributional models that learn rich semantic word representations are a success story of recent NLP research. However, developing models that learn useful representations of phrases and sentences has proved far harder. We propose using the definitions found in everyday dictionaries as a means of bridging this gap between lexical and phrasal semantics. Neural language embedding models can be effectively trained to map dictionary definitions (phrases) to (lexical) representations of the words defined by those definitions. We present two applications of these architectures: reverse dictionaries that return the name of a concept given a definition or description, and general-knowledge crossword question answerers. On both tasks, neural language embedding models trained on definitions from a handful of freely-available lexical resources perform as well or better than existing commercial systems that rely on significant task-specific engineering. The results highlight the effectiveness of both neural embedding architectures and definition-based training for developing models that understand phrases and sentences.

1 Introduction

Much recent research in computational semantics has focussed on learning representations of arbitrary-length phrases and sentences. This task is challenging partly because there is no obvious gold standard of phrasal representation that could be used in training and evaluation. Consequently, it is difficult to design approaches that could learn from such a gold standard, and also hard to evaluate or compare different models.

In this work, we use dictionary definitions to address this issue. The composed meaning of the words in a dictionary definition (a tall, long-necked, spotted ruminant of Africa) should correspond to the meaning of the word they define (giraffe). This bridge between lexical and phrasal semantics is useful because high quality vector representations of single words can be used as a target when learning to combine the words into a coherent phrasal representation.

This approach still requires a model capable of learning to map between arbitrary-length phrases and fixed-length continuous-valued word vectors. For this purpose we experiment with two broad classes of neural language models (NLMs): Recurrent Neural Networks (RNNs), which naturally encode the order of input words, and simpler (feed-forward) bag-of-words (BOW) embedding models. Prior to training these NLMs, we learn target lexical representations by training the Word2Vec software (Mikolov et al., 2013) on billions of words of raw text.

We demonstrate the usefulness of our approach by building and releasing two applications. The first is a reverse dictionary or concept finder: a system that returns words based on user descriptions or definitions (Zock and Bilac, 2004).
Reverse dictionaries are used by copywriters, novelists, translators and other professional writers to find words for notions or ideas that might be on the tip of their tongue. For instance, a travel-writer might look to enhance her prose by searching for examples of a country that people associate with warm weather or an activity that is mentally or physically demanding. We show that an NLM-based reverse dictionary trained on only a handful of dictionaries identifies novel definitions and concept descriptions comparably or better than commercial systems, which rely on significant task-specific engineering and access to much more dictionary data. Moreover, by exploiting models that learn bilingual word representations (Vulic et al., 2011; Klementiev et al., 2012; Hermann and Blunsom, 2013; Gouws et al., 2014), we show that the NLM approach can be easily extended to produce a potentially useful cross-lingual reverse dictionary.

The second application of our models is as a general-knowledge crossword question answerer. When trained on both dictionary definitions and the opening sentences of Wikipedia articles, NLMs produce plausible answers to (non-cryptic) crossword clues, even those that apparently require detailed world knowledge. Both BOW and RNN models can outperform bespoke commercial crossword solvers, particularly when clues contain a greater number of words. Qualitative analysis reveals that NLMs can learn to relate concepts that are not directly connected in the training data and can thus generalise well to unseen input. To facilitate further research, all of our code, training and evaluation sets (together with a system demo) are published online with this paper (https://www.cl.cam.ac.uk/~fh295/).

2 Neural Language Model Architectures

The first model we apply to the dictionary-based learning task is a recurrent neural network (RNN). RNNs operate on variable-length sequences of inputs; in our case, natural language definitions, descriptions or sentences. RNNs (with LSTMs) have achieved state-of-the-art performance in language modelling (Mikolov et al., 2010) and image caption generation (Kiros et al., 2015), and approach state-of-the-art performance in machine translation (Bahdanau et al., 2015).

During training, the input to the RNN is a dictionary definition or sentence from an encyclopedia. The objective of the model is to map these defining phrases or sentences to an embedding of the word that the definition defines. The target word embeddings are learned independently of the RNN weights, using the Word2Vec software (Mikolov et al., 2013).

The set of all words in the training data constitutes the vocabulary of the RNN. For each word in this vocabulary we randomly initialise a real-valued vector (input embedding) of model parameters. The RNN 'reads' the first word in the input by applying a non-linear projection of its embedding v_1, parameterised by input weight matrix W and b, a vector of biases:

A_1 = φ(W v_1 + b),

yielding the first internal activation state A_1. In our implementation, we use φ(x) = tanh(x), though in theory φ can be any differentiable non-linear function.
Subsequent internal activations (after time-step t) are computed by projecting the embedding of the t-th word and using this information to 'update' the internal activation state:

A_t = φ(U A_{t-1} + W v_t + b).

As such, the values of the final internal activation state units A_N are a weighted function of all input word embeddings, and constitute a 'summary' of the information in the sentence.

2.1 Long Short Term Memory

A known limitation when training RNNs to read language using gradient descent is that the error signal (gradient) on the training examples either vanishes or explodes as the number of time steps (sentence length) increases (Bengio et al., 1994). Consequently, after reading longer sentences the final internal activation A_N typically retains useful information about the most recently read (sentence-final) words, but can neglect important information near the start of the input sentence. LSTMs (Hochreiter and Schmidhuber, 1997) were designed to mitigate this long-term dependency problem.

At each time step t, in place of the single internal layer of units A, the LSTM RNN computes six internal layers i^w, g^i, g^f, g^o, h and m. The first, i^w, represents the core information passed to the LSTM unit by the latest input word at t. It is computed as a simple linear projection of the input embedding v_t (by input weights W^w) and the output state of the LSTM at the previous time step h_{t-1} (by update weights U^w):

i^w_t = W^w v_t + U^w h_{t-1} + b^w.

The layers g^i, g^f and g^o are computed as weighted sigmoid functions of the input embeddings, again parameterised by layer-specific weight matrices W and U:

g^s_t = 1 / (1 + exp(-(W^s v_t + U^s h_{t-1} + b^s))),

where s stands for one of i, f or o. These vectors take values in [0,1] and are often referred to as gating activations. Finally, the internal memory state m_t and new output state h_t of the LSTM at t are computed as

m_t = i^w_t ⊙ g^i_t + m_{t-1} ⊙ g^f_t
h_t = g^o_t ⊙ φ(m_t),

where ⊙ indicates elementwise vector multiplication and φ is, as before, some non-linear function (we use tanh). Thus, g^i determines to what extent the new input word is considered at each time step, g^f determines to what extent the existing state of the internal memory is retained or forgotten in computing the new internal memory, and g^o determines how much this memory is considered when computing the output state at t.

The sentence-final memory state of the LSTM, m_N, a 'summary' of all the information in the sentence, is then projected via an extra non-linear projection (parameterised by a further weight matrix) to a target embedding space. This layer enables the target (defined) word embedding space to take a different dimension to the activation layers of the RNN, and in principle enables a more complex definition-reading function to be learned.

2.2 Bag-of-Words NLMs

We implement a simpler linear bag-of-words (BOW) architecture for encoding the definition phrases. As with the RNN, this architecture learns an embedding v_i for each word in the model vocabulary, together with a single matrix of input projection weights W. The BOW model simply maps an input definition with word embeddings v_1 ... v_n to the sum of the projected embeddings, Σ_{i=1}^{n} W v_i. This model can also be considered a special case of an RNN in which the update function U and nonlinearity φ are both the identity, so that 'reading' the next word in the input phrase updates the current representation more simply:

A_t = A_{t-1} + W v_t.
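To make the two encoder classes concrete, the following is a minimal NumPy sketch of the BOW composition and the plain (non-LSTM) RNN update described above. It is illustrative only, not the released implementation: the toy vocabulary, random initialisation and the use of a single hidden size (rather than the paper's 512-unit LSTM layers projected to length 500) are assumptions made for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

EMB_DIM, OUT_DIM = 256, 500   # roughly the sizes reported in Section 2.5

# Toy vocabulary with randomly initialised input embeddings v_i.
vocab = {w: i for i, w in enumerate(
    "a tall long-necked spotted ruminant of africa".split())}
V = rng.normal(scale=0.1, size=(len(vocab), EMB_DIM))   # input embeddings
W = rng.normal(scale=0.1, size=(OUT_DIM, EMB_DIM))      # input projection weights
U = rng.normal(scale=0.1, size=(OUT_DIM, OUT_DIM))      # recurrent (update) weights
b = np.zeros(OUT_DIM)                                   # bias vector

def bow_encode(tokens):
    """BOW encoder: sum of projected input embeddings, sum_i W v_i."""
    return sum(W @ V[vocab[t]] for t in tokens)

def rnn_encode(tokens):
    """Plain RNN encoder: A_t = tanh(U A_{t-1} + W v_t + b); returns A_N."""
    A = np.zeros(OUT_DIM)
    for t in tokens:
        A = np.tanh(U @ A + W @ V[vocab[t]] + b)
    return A

definition = "a tall long-necked spotted ruminant of africa".split()
print(bow_encode(definition).shape, rnn_encode(definition).shape)   # (500,) (500,)
```

Setting U to the identity and dropping the tanh in rnn_encode recovers bow_encode, which is the sense in which the BOW model is a special case of the RNN.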
2.3 Pre-trained Input Representations

We experiment with variants of these models in which the input definition embeddings are pre-learned and fixed (rather than randomly-initialised and updated) during training. There are several potential advantages to taking this approach. First, the word embeddings are trained on massive corpora and may therefore introduce additional linguistic or conceptual knowledge to the models. Second, at test time, the models will have a larger effective vocabulary, since the pre-trained word embeddings typically span a larger vocabulary than the union of all dictionary definitions used to train the model. Finally, the models will then map to and from the same space of embeddings (the embedding space will be closed under the operation of the model), so conceivably could be more easily applied as a general-purpose 'composition engine'.

2.4 Training Objective

We train all neural language models M to map the input definition phrase s_c defining word c to a location close to the pre-trained embedding v_c of c. We experiment with two different cost functions for the word-phrase pair (c, s_c) from the training data. The first is simply the cosine distance between M(s_c) and v_c. The second is the rank loss

max(0, m - cos(M(s_c), v_c) + cos(M(s_c), v_r)),

where v_r is the embedding of a randomly-selected word from the vocabulary other than c. This loss function was used for language models, for example, in (Huang et al., 2012). In all experiments we apply a margin m = 0.1, which has been shown to work well on word-retrieval tasks (Bordes et al., 2015).

2.5 Implementation Details

Since training on the dictionary data took 6-10 hours, we did not conduct a hyper-parameter search on any validation sets over the space of possible model configurations such as embedding dimension or size of hidden layers. Instead, we chose these parameters to be as standard as possible based on previous research. For fair comparison, any aspects of model design that are not specific to a particular class of model were kept constant across experiments.

The pre-trained word embeddings used in all of our models (either as input or target) were learned by a continuous bag-of-words (CBOW) model using the Word2Vec software on approximately 8 billion words of running text (the Word2Vec embedding models are well known; further details can be found at https://code.google.com/p/word2vec/, and the training data for this pre-training was compiled from various online text sources using the script demo-train-big-model-v1.sh from the same page). When training such models on massive corpora, large embedding lengths of up to 700 have been shown to yield the best performance (see e.g. (Faruqui et al., 2014)). The pre-trained embeddings used in our models were of length 500, as a compromise between quality and memory constraints. In cases where the word embeddings are learned during training on the dictionary objective, we make these embeddings shorter (256), since they must be learned from much less language data. In the RNN models, at each time step each of the four LSTM RNN internal layers (gating and activation states) had length 512 – another standard choice (see e.g. (Cho et al., 2014)). The final hidden state was mapped linearly to length 500, the dimension of the target embedding. In the BOW models, the projection matrix projects input embeddings (either learned, of length 256, or pre-trained, of length 500) to length 500 for summing.

All models were implemented with Theano (Bergstra et al., 2010) and trained with minibatch SGD on GPUs. The batch size was fixed at 16 and the learning rate was controlled by adadelta (Zeiler, 2012).
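The two objectives of Section 2.4 can be written down directly. The snippet below is a small NumPy sketch rather than the Theano training code; the hinge form shown is the standard margin (rank) formulation, with m = 0.1 as in the experiments, and the random vectors merely stand in for M(s_c), v_c and v_r.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def cosine_loss(pred, target):
    """First objective: cosine distance between M(s_c) and v_c."""
    return 1.0 - cosine(pred, target)

def rank_loss(pred, target, random_target, margin=0.1):
    """Second objective: hinge (rank) loss encouraging cos(M(s_c), v_c) to
    exceed cos(M(s_c), v_r) by at least the margin m."""
    return max(0.0, margin - cosine(pred, target) + cosine(pred, random_target))

rng = np.random.default_rng(1)
pred, v_c, v_r = rng.normal(size=(3, 500))   # stand-ins for M(s_c), v_c and v_r
print(cosine_loss(pred, v_c), rank_loss(pred, v_c, v_r))
```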
3 Reverse Dictionaries

The most immediate application of our trained models is as a reverse dictionary or concept finder. It is simple to look up a definition in a dictionary given a word, but professional writers often also require suitable words for a given idea, concept or definition (see the testimony from professional writers at http://www.onelook.com/?c=awards). Reverse dictionaries satisfy this need by returning candidate words given a phrase, description or definition. For instance, when queried with the phrase an activity that requires strength and determination, the OneLook.com reverse dictionary returns the concepts exercise and work. Our trained RNN model can perform a similar function, simply by mapping a phrase to a point in the target (Word2Vec) embedding space, and returning the words corresponding to the embeddings that are closest to that point.

Several other academic studies have proposed reverse dictionary models. These generally rely on common techniques from information retrieval, comparing definitions in their internal database to the input query, and returning the word whose definition is 'closest' to that query (Bilac et al., 2003; Bilac et al., 2004; Zock and Bilac, 2004). Proximity is quantified differently in each case, but is generally a function of hand-engineered features of the two sentences. For instance, Shaw et al. (2013) propose a method in which the candidates for a given input query are all words in the model's database whose definitions contain one or more words from the query. This candidate list is then ranked according to a query-definition similarity metric based on the hypernym and hyponym relations in WordNet, features commonly used in IR such as tf-idf, and a parser.

There are, in addition, at least two commercial online reverse dictionary applications, whose architecture is proprietary knowledge. The first is the Dictionary.com reverse dictionary (available at http://dictionary.reference.com/reverse/), which retrieves candidate words from the Dictionary.com dictionary based on user definitions or descriptions. The second is OneLook.com, whose algorithm searches 1061 indexed dictionaries, including all major freely-available online dictionaries and resources such as Wikipedia and WordNet.

3.1 Data Collection and Training

To compile a bank of dictionary definitions for training the model, we started with all words in the target embedding space. For each of these words, we extracted dictionary-style definitions from five electronic resources: WordNet, The American Heritage Dictionary, The Collaborative International Dictionary of English, Wiktionary and Webster's. We chose these five dictionaries because they are freely available via the WordNik API (see http://developer.wordnik.com), but in theory any dictionary could be chosen. Most words in our training data had multiple definitions. For each word w with definitions {d_1 ... d_n} we included all pairs (w, d_1) ... (w, d_n) as training examples.
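As an illustration of how the training pairs are assembled, here is a small sketch; the in-memory dictionary of definitions is a stand-in for the data returned by the five resources (the WordNik API calls themselves are not shown, and the second 'valve' definition is invented for the example).

```python
# Hypothetical per-word definition lists standing in for the five dictionaries.
definitions = {
    "giraffe": ["a tall, long-necked, spotted ruminant of Africa"],
    "valve": ["control consisting of a mechanical device for controlling fluid flow",
              "a structure in a hollow organ that prevents backflow"],
}

# Every (word, definition) combination becomes one training example (w, d_i).
training_pairs = [(word, d) for word, defs in definitions.items() for d in defs]
print(training_pairs)
```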
To allow models access to more factual knowledge than might be present in a dictionary (for instance, information about specific entities, places or people), we supplemented this training data with information extracted from Simple Wikipedia (https://simple.wikipedia.org/wiki/Main_Page). For every word in the model's target embedding space that is also the title of a Wikipedia article, we treat the sentences in the first paragraph of the article as if they were (independent) definitions of that word. When a word in Wikipedia also occurs in one (or more) of the five training dictionaries, we simply add these pseudo-definitions to the training set of definitions for the word. Combining Wikipedia and dictionaries in this way resulted in ≈900,000 word-'definition' pairs of ≈100,000 unique words.

To explore the effect of the quantity of training data on the performance of the models, we also trained models on subsets of this data. The first subset comprised only definitions from WordNet (approximately 150,000 definitions of 75,000 words). The second subset comprised only words in WordNet and their first definitions (approximately 75,000 word-definition pairs; as with other dictionaries, the first definition in WordNet generally corresponds to the most typical or common sense of a word). For all variants of RNN and BOW models, however, reducing the training data in this way resulted in a clear reduction in performance on all tasks. For brevity, we therefore do not present these results in what follows.

3.2 Comparisons

As a baseline, we also implemented two entirely unsupervised methods using the neural (Word2Vec) word embeddings from the target word space. In the first (W2V add), we compose the embeddings for each word in the input query by pointwise addition, and return as candidates the nearest word embeddings to the resulting composed vector (since we retrieve all answers from embedding spaces by cosine similarity, addition of word embeddings is equivalent to taking the mean). The second baseline (W2V mult) is identical except that the embeddings are composed by elementwise multiplication. Both methods are established ways of building phrase representations from word embeddings (Mitchell and Lapata, 2010).

None of the models or evaluations from previous academic research on reverse dictionaries is publicly available, so direct comparison is not possible. However, we do compare performance with the commercial systems. The Dictionary.com system returned no candidates for over 96% of our input definitions. We therefore conduct detailed comparison with OneLook.com, which is the first reverse dictionary tool returned by a Google search and seems to be the most popular among writers.

3.3 Reverse Dictionary Evaluation

To our knowledge there are no established means of measuring reverse dictionary performance. In the only previous academic research on English reverse dictionaries that we are aware of, evaluation was conducted on 300 word-definition pairs written by lexicographers (Shaw et al., 2013). Since these are not publicly available, we developed new evaluation sets and make them freely available for future evaluations.

The evaluation items are of three types, designed to test different properties of the models. To create the seen evaluation, we randomly selected 500 words from the WordNet training data (seen by all models), and then randomly selected a definition for each word. Testing models on the resulting 500 word-definition pairs assesses their ability to recall or decode previously encoded information.
For the unseen evaluation, we randomly selected 500 words from WordNet and excluded all definitions of these words from the training data of all models.

Finally, for a fair comparison with OneLook, which has both the seen and unseen pairs in its internal database, we built a new dataset of concept descriptions that do not appear in the training data for any model. To do so, we randomly selected 200 adjectives, nouns or verbs from among the top 3000 most frequent tokens in the British National Corpus (Leech et al., 1994) (but outside the top 100). We then asked ten native English speakers to write a single-sentence 'description' of these words. To ensure the resulting descriptions were of good quality, for each description we asked two participants who did not produce that description to list any words that fitted the description (up to a maximum of three). If the target word was not produced by one of the two checkers, the original participant was asked to re-write the description until the validation was passed (re-writing was required in 6 of the 200 cases). These concept descriptions, together with the other evaluation sets, can be downloaded from our website for future comparisons.

Test set                Word     Description
Dictionary definition   valve    "control consisting of a mechanical device for controlling fluid flow"
Concept description     prefer   "when you like one thing more than another thing"

Table 2: Style difference between dictionary definitions and concept descriptions in the evaluation.

Given a test description, definition, or question, all models produce a ranking of possible word answers based on the proximity of their representations of the input phrase and all possible output words. To quantify the quality of a given ranking, we report three statistics: the median rank of the correct answer (over the whole test set, lower better), the proportion of test cases in which the correct answer appears in the top 10/100 in this ranking (accuracy@10/100, higher better) and the variance of the rank of the correct answer across the test set (rank variance, lower better).

3.4 Results

Table 1 shows the performance of the different models in the three evaluation settings.

Test set:             Seen (500 WN defs)        Unseen (500 WN defs)      Concept descriptions (200)
                      rank  acc@10/100  var     rank  acc@10/100  var     rank  acc@10/100  var
Unsupervised models
  W2V add             -     -           -       923   .04/.16     163     339   .07/.30     150
  W2V mult            -     -           -       1000  .00/.00     10*     1000  .00/.00     27*
OneLook               0     .89/.91     67      -     -           -       18.5  .38/.58     153
NLMs
  RNN cosine          12    .48/.73     103     22    .41/.70     116     69    .28/.54     157
  RNN w2v cosine      19    .44/.70     111     19    .44/.69     126     26    .38/.66     111
  RNN ranking         18    .45/.67     128     24    .43/.69     103     25    .34/.66     102
  RNN w2v ranking     54    .32/.56     155     33    .36/.65     137     30    .33/.69     77
  BOW cosine          22    .44/.65     129     19    .43/.69     103     50    .34/.60     99
  BOW w2v cosine      15    .46/.71     124     14    .46/.71     104     28    .36/.66     99
  BOW ranking         17    .45/.68     115     22    .42/.70     95      32    .35/.69     101
  BOW w2v ranking     55    .32/.56     155     36    .35/.66     138     38    .33/.72     85

Table 1: Performance of different reverse dictionary models in different evaluation settings (cells report median rank, accuracy@10/100 and rank variance; the Seen and Unseen test sets consist of dictionary definitions). *Low variance in mult models is due to consistently poor scores, so not highlighted.
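Before turning to the results, the three statistics reported in Table 1 can be computed from the rank that each model assigns to the correct answer. The sketch below assumes candidates are scored by cosine similarity against the target embeddings and that the top candidate has rank 1; it is illustrative code, not the released evaluation script, and the random vectors are placeholders for real embeddings.

```python
import numpy as np

def rank_of_correct(query_vec, target_embeddings, correct_idx):
    """Rank (1 = best) of the correct word when all words are ordered by
    cosine similarity to the encoded query."""
    sims = target_embeddings @ query_vec / (
        np.linalg.norm(target_embeddings, axis=1) * np.linalg.norm(query_vec))
    return int(np.sum(sims > sims[correct_idx])) + 1

def ranking_statistics(ranks):
    """Median rank, accuracy@10, accuracy@100 and rank variance over a test set."""
    ranks = np.asarray(ranks)
    return (float(np.median(ranks)),
            float(np.mean(ranks <= 10)), float(np.mean(ranks <= 100)),
            float(np.var(ranks)))

# Toy usage: a query vector lying near word 42's embedding should rank it highly.
rng = np.random.default_rng(4)
targets = rng.normal(size=(1000, 500))
query = targets[42] + 0.1 * rng.normal(size=500)
print(rank_of_correct(query, targets, 42))          # usually 1 or close to it
print(ranking_statistics([3, 15, 1, 250, 42]))      # made-up ranks for five items
```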
Of the unsupervised composition models, elementwise addition is clearly more effective than multiplication, which almost never returns the correct word as the nearest neighbour of the composition. Overall, however, the supervised models (RNN, BOW and OneLook) clearly outperform these baselines.

The results indicate interesting differences between the NLMs and the OneLook dictionary search engine. The Seen (WN first) definitions in Table 1 occur in both the training data for the NLMs and the lookup data for the OneLook model. Clearly the OneLook algorithm is better than NLMs at retrieving already available information (returning 89% of correct words among the top-ten candidates on this set). However, this is likely to come at the cost of a greater memory footprint, since the model requires access to its database of dictionaries at query time (the trained neural language models are approximately half the size of the six training dictionaries stored as plain text, so would be hundreds of times smaller than the OneLook database of 1061 dictionaries if stored this way).

The performance of the NLM embedding models on the (unseen) concept descriptions task shows that these models can generalise well to novel, unseen queries. While the median rank for OneLook on this evaluation is lower, the NLMs retrieve the correct answer in the top ten candidates approximately as frequently, within the top 100 candidates more frequently and with lower variance in ranking over the test set. Thus, NLMs seem to generalise more 'consistently' than OneLook on this dataset, in that they generally assign a reasonably high ranking to the correct word. In contrast, as can also be verified by querying our web demo, OneLook tends to perform either very well or poorly on a given query (we also observed that the mean ranking for NLMs was lower than for OneLook on the concept descriptions task).

When comparing between NLMs, perhaps the most striking observation is that the RNN models do not significantly outperform the BOW models, even though the BOW model output is invariant to changes in the order of words in the definition. Users of the online demo can verify that the BOW models recover concepts from descriptions strikingly well, even when the words in the description are permuted. This observation underlines the importance of lexical semantics in the interpretation of language by NLMs, and is consistent with some other recent work on embedding sentences (Iyyer et al., 2015).

It is difficult to observe clear trends in the differences between NLMs that learn input word embeddings and those with pre-trained (Word2Vec) input embeddings. Both types of input yield good performance in some situations and weaker performance in others. In general, pre-training input embeddings seems to help most on the concept descriptions, which are furthest from the training data in terms of linguistic style. This is perhaps unsurprising, since models that learn input embeddings from the dictionary data acquire all of their conceptual knowledge from this data (and thus may overfit to this setting), whereas models with pre-trained embeddings have some semantic memory acquired from general running-text language data and other knowledge acquired from the dictionaries.

3.5 Qualitative Analysis

Some example output from the various models is presented in Table 3. The differences illustrated here are also evident from querying the web demo.
"a native of a cold country"
  OneLook: country, citizen, foreign, naturalize, cisco
  W2V add: a, the, another, of, whole
  RNN: eskimo, scandinavian, arctic, indian, siberian
  BOW: frigid, cold, icy, russian, indian

"a way of moving through the air"
  OneLook: drag, whiz, aerodynamics, draught, coefficient of drag
  W2V add: the, through, a, moving, in
  RNN: glide, scooting, glides, gliding, flight
  BOW: flying, gliding, glide, fly, scooting

"a habit that might annoy your spouse"
  OneLook: sisterinlaw, fatherinlaw, motherinlaw, stepson, stepchild
  W2V add: annoy, your, might, that, either
  RNN: bossiness, jealousy, annoyance, rudeness, boorishness
  BOW: infidelity, bossiness, foible, unfaithfulness, adulterous

Table 3: The top-five candidates (in rank order) for example queries (invented by the authors) from different reverse dictionary models. Both the RNN and BOW models are without Word2Vec input and use the cosine loss.

The first example shows how the NLMs (BOW and RNN) generalise beyond their training data. Four of the top five responses could be classed as appropriate in that they refer to inhabitants of cold countries. However, inspecting the WordNik training data, there is no mention of cold or anything to do with climate in the definitions of Eskimo, Scandinavian, Scandinavia etc. Therefore, the embedding models must have learned that coldness is a characteristic of Scandinavia, Siberia, Russia, relates to Eskimos etc. via connections with other concepts that are described or defined as cold. In contrast, the candidates produced by the OneLook and (unsupervised) W2V baseline models have nothing to do with coldness.

The second example demonstrates how the NLMs generally return candidates whose linguistic or conceptual function is appropriate to the query. For a query referring explicitly to a means, method or process, the RNN and BOW models produce verbs in different forms or an appropriate deverbal noun. In contrast, OneLook returns words of all types (aerodynamics, draught) that are arbitrarily related to the words in the query. A similar effect is apparent in the third example. While the candidates produced by the OneLook model are the correct part of speech (Noun), and related to the query topic, they are not semantically appropriate. The dictionary embedding models are the only ones that return a list of plausible habits, the class of noun requested by the input.

3.6 Cross-Lingual Reverse Dictionaries

We now show how the RNN architecture can be easily modified to create a bilingual reverse dictionary - a system that returns candidate words in one language given a description or definition in another. A bilingual reverse dictionary could have clear applications for translators or transcribers. Indeed, the problem of attaching appropriate words to concepts may be more common when searching for words in a second language than in a monolingual context.

"an emotion that you might feel after being rejected"
  RNN EN-FR: triste, pitoyable, répugnante, épouvantable
  W2V add: insister, effectivement, pourquoi, nous
  RNN + Google: sentiment, regretter, peur, aversion

"a small black flying insect that transmits disease and likes horses"
  RNN EN-FR: mouche, canard, hirondelle, pigeon
  W2V add: attentivement, pouvions, pourrons, naturellement
  RNN + Google: voler, faucon, mouches, volant

Table 4: Responses from cross-lingual reverse dictionary models to selected queries. Underlined responses are 'correct' or potentially useful for a native French speaker.
To create the bilingual variant, we simply replace the Word2Vec target embeddings with those from a bilingual embedding space. Bilingual embedding models use bilingual corpora to learn a space of representations of the words in two languages, such that words from either language that have similar meanings are close together (Hermann and Blunsom, 2013; Chandar et al., 2014; Gouws et al., 2014). For a test-of-concept experiment, we used English-French embeddings learned by the state-of-the-art BilBOWA model (Gouws et al., 2014) from the Wikipedia (monolingual) and Europarl (bilingual) corpora (the approach should work with any bilingual embeddings; we thank Stephan Gouws for doing the training). We trained the RNN model to map from English definitions to English words in the bilingual space. At test time, after reading an English definition, we then simply return the nearest French word neighbours to that definition.

Because no benchmarks exist for quantitative evaluation of bilingual reverse dictionaries, we compare this approach qualitatively with two alternative methods for mapping definitions to words across languages. The first is analogous to the W2V Add model of the previous section: in the bilingual embedding space, we first compose the embeddings of the English words in the query definition with elementwise addition, and then return the French word whose embedding is nearest to this vector sum. The second uses the RNN monolingual reverse dictionary model to identify an English word from an English definition, and then translates that word using Google Translate.

Table 4 shows that the RNN model can be effectively modified to create a cross-lingual reverse dictionary. It is perhaps unsurprising that the W2V Add model candidates are generally the lowest in quality given the performance of the method in the monolingual setting. In comparing the two RNN-based methods, the RNN (embedding space) model appears to have two advantages over the RNN + Google approach. First, it does not require online access to a bilingual word-word mapping as defined e.g. by Google Translate. Second, it is less prone to errors caused by word sense ambiguity. For example, in response to the query an emotion you feel after being rejected, the bilingual embedding RNN returns emotions or adjectives describing mental states. In contrast, the monolingual+Google model incorrectly maps the plausible English response regret to the verbal infinitive regretter. The model makes the same error when responding to a description of a fly, returning the verb voler (to fly).

3.7 Discussion

We have shown that simply training RNN or BOW NLMs on six dictionaries yields a reverse dictionary that performs comparably to the leading commercial system, even with access to much less dictionary data. Indeed, the embedding models consistently return syntactically and semantically plausible responses, which are generally part of a more coherent and homogeneous set of candidates than those produced by the commercial systems. We also showed how the architecture can be easily extended to produce bilingual versions of the same model.

In the analyses performed thus far, we only test the dictionary embedding approach on tasks that it was trained to accomplish (mapping definitions or descriptions to words). In the next section, we explore whether the knowledge learned by dictionary embedding models can be effectively transferred to a novel task.
4 General Knowledge (crossword) Question Answering

The automatic answering of questions posed in natural language is a central problem of Artificial Intelligence. Although web search and IR techniques provide a means to find sites or documents related to language queries, at present, internet users requiring a specific fact must still sift through pages to locate the desired information.

Systems that attempt to overcome this, via fully open-domain or general knowledge question-answering (open QA), generally require large teams of researchers, modular design and powerful infrastructure, exemplified by IBM's Watson (Ferrucci et al., 2010). For this reason, much academic research focuses on settings in which the scope of the task is reduced. This has been achieved by restricting questions to a specific topic or domain (Mollá and Vicedo, 2007), allowing systems access to pre-specified passages of text from which the answer can be inferred (Iyyer et al., 2014; Weston et al., 2015), or centering both questions and answers on a particular knowledge base (Berant and Liang, 2014; Bordes et al., 2014).

In what follows, we show that the dictionary embedding models introduced in the previous sections may form a useful component of an open QA system. Given the absence of a knowledge base or web-scale information in our architecture, we narrow the scope of the task by focusing on general knowledge crossword questions. General knowledge (non-cryptic, or quick) crosswords appear in national newspapers in many countries. Crossword question answering is more tractable than general open QA for two reasons. First, models know the length of the correct answer (in letters), reducing the search space. Second, some crossword questions mirror definitions, in that they refer to fundamental properties of concepts (a twelve-sided shape) or request a category member (a city in Egypt). (As our interest is in language understanding, we do not address the question of fitting answers into a grid, which is the main concern of end-to-end automated crossword solvers (Littman et al., 2002).)

4.1 Evaluation

General knowledge crossword questions come in different styles and forms. We used the Eddie James crossword website (http://www.eddiejames.co.uk/) to compile a bank of sentence-like general-knowledge questions. Eddie James is one of the UK's leading crossword compilers, working for several national newspapers. Our long question set consists of the first 150 questions (starting from puzzle #1) from his general-knowledge crosswords, excluding clues of fewer than four words and those whose answer was not a single word (e.g. kingjames). To evaluate models on a different type of clue, we also compiled a set of shorter questions based on the Guardian Quick Crossword. Guardian questions still require general factual or linguistic knowledge, but are generally shorter and somewhat more cryptic than the longer Eddie James clues.
In this case, however, we only consider candidate words whose length matches the length specified in the clue. Test set Word Description Long Baudelaire ”French poet (150) and key figure in the development of Symbolism.” Short (120) satanist ”devil devotee” Single-Word (30) guilt ”culpability” Table 5: Examples of the different question types in the crossword question evaluation dataset. 4.2 Benchmarks and Comparisons As with the reverse dictionary experiments, we com- pare RNN and BOW NLMs with a simple unsuper- vised baseline of elementwise addition of Word2Vec vectors in the embedding space (we discard the in- effective W2V mult baseline), again restricting can- didates to words of the pre-specified length. We also compare to two bespoke online crossword- solving engines. The first, One Across (http: //www.oneacross.com/) is the candidate gen- eration module of the award-winning Proverb cross- word system (Littman et al., 2002). Proverb, which was produced by academic researchers, has featured in national media such as New Scientist, and beaten expert humans in crossword solving tournaments. The second comparison is with Crossword Maestro (http://www.crosswordmaestro.com/), a commercial crossword solving system that handles both cryptic and non-cryptic crossword clues (we focus only on the non-cryptic setting), and has also been featured in national media.15 We are unable 15 See e.g. http://www.theguardian.com/ crosswords/crossword-blog/2012/mar/08/ to compare against a third well-known automatic crossword solver, Dr Fill (Ginsberg, 2011), because code for Dr Fill’s candidate-generation module is not readily available. As with the RNN and base- line models, when evaluating existing systems we discard candidates whose length does not match the length specified in the clue. Certain principles connect the design of the ex- isting commercial systems and differentiate them from our approach. Unlike the NLMs, they each re- quire query-time access to large databases contain- ing common crossword clues, dictionary definitions, the frequency with which words typically appear as crossword solutions and other hand-engineered and task-specific components (Littman et al., 2002; Ginsberg, 2011). 4.3 Results The performance of models on the various question types is presented in Table 6. When evaluating the two commercial systems, One Across and Cross- word Maestro, we have access to web interfaces that return up to approximately 100 candidates for each query, so can only reliably record membership of the top ten (accuracy@10). On the long questions, we observe a clear advan- tage for all dictionary embedding models over the commercial systems and the simple unsupervised baseline. Here, the best performing NLM (RNN with Word2Vec input embeddings and ranking loss) ranks the correct answer third on average, and in the top-ten candidates over 60% of the time. As the questions get shorter, the advantage of the embedding models diminishes. Both the unsu- pervised baseline and One Across answer the short questions with comparable accuracy to the RNN and BOW models. One reason for this may be the differ- ence in form and style between the shorter clues and the full definitions or encyclopedia sentences in the dictionary training data. As the length of the clue de- creases, finding the answer often reduces to generat- ing synonyms (culpability - guilt), or category mem- bers (tall animal - giraffe). 
The commercial systems can retrieve good candidates for such clues among their databases of entities, relationships and common crossword answers. Unsupervised Word2Vec representations are also known to encode these sorts of relationships (even after elementwise addition for short sequences of words) (Mikolov et al., 2013). This would also explain why the dictionary embedding models with pre-trained (Word2Vec) input embeddings outperform those with learned embeddings, particularly for the shortest questions.
Question type:        Long (150)               Short (120)              Single-Word (30)
                      rank  acc@10/100  var    rank  acc@10/100  var    rank  acc@10/100  var
One Across            -     .39/-       -      -     .68/-       -      -     .70/-       -
Crossword Maestro     -     .27/-       -      -     .43/-       -      -     .73/-       -
W2V add               42    .31/.63     92     11    .50/.78     66     2     .79/.90     45
RNN cosine            15    .43/.69     108    22    .39/.67     117    72    .31/.52     187
RNN w2v cosine        4     .61/.82     60     7     .56/.79     60     12    .48/.72     116
RNN ranking           6     .58/.84     48     10    .51/.73     57     12    .48/.69     67
RNN w2v ranking       3     .62/.80     61     8     .57/.78     49     12    .48/.69     114
BOW cosine            4     .60/.82     54     7     .56/.78     51     12    .45/.72     137
BOW w2v cosine        4     .60/.83     56     7     .54/.80     48     3     .59/.79     111
BOW ranking           5     .62/.87     50     8     .58/.83     37     8     .55/.79     39
BOW w2v ranking       5     .60/.86     48     8     .56/.83     35     4     .55/.83     43

Table 6: Performance of different models on crossword questions of different length (cells report average rank, accuracy@10/100 and rank variance). The two commercial systems are evaluated via their web interface, so only accuracy@10 can be reported in those cases.
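As a concrete illustration of the candidate-generation step used in the crossword experiments (Sections 4.1-4.2), the sketch below ranks words by cosine similarity to the encoded clue and keeps only candidates whose letter count matches the length given in the clue. The toy vocabulary and random vectors are placeholders for a trained encoder and its target embedding space; this is not the released code.

```python
import numpy as np

def crossword_candidates(clue_vec, answer_length, target_embeddings, id_to_word, top_k=5):
    """Rank all words by cosine similarity to the encoded clue, then discard
    candidates whose length does not match the length specified in the clue."""
    sims = target_embeddings @ clue_vec / (
        np.linalg.norm(target_embeddings, axis=1) * np.linalg.norm(clue_vec))
    order = np.argsort(-sims)
    matches = [id_to_word[i] for i in order if len(id_to_word[i]) == answer_length]
    return matches[:top_k]

# Toy usage: random stand-ins for the trained encoder output and the target space.
rng = np.random.default_rng(3)
id_to_word = ["eiger", "aosta", "tyrol", "guilder", "florins", "joshua", "isaiah"]
targets = rng.normal(size=(len(id_to_word), 500))
encoded_clue = rng.normal(size=500)     # would be M(clue) from the RNN/BOW encoder
print(crossword_candidates(encoded_clue, 5, targets, id_to_word))
```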
4.4 Qualitative Analysis

A better understanding of how the different models arrive at their answers can be gained from considering specific examples, as presented in Table 7. The first three examples show that, despite the apparently superficial nature of their training data (definitions and introductory sentences), embedding models can answer questions that require factual knowledge about people and places. Another notable characteristic of these models is the consistent semantic appropriateness of the candidate set. In the first case, the top five candidates are all mountains, valleys or places in the Alps; in the second, they are all biblical names. In the third, the RNN model retrieves currencies, in this case performing better than the BOW model, which retrieves entities of various types associated with the Netherlands. Generally speaking (as can be observed by the web demo), the 'smoothness' or consistency in candidate generation of the dictionary embedding models is greater than that of the commercial systems. Despite its simplicity, the unsupervised W2V addition method is at times also surprisingly effective, as shown by the fact that it returns Joshua in its top candidates for the third query.

"Swiss mountain peak famed for its north face (5)"
  One Across: noted, front, Eiger, crown, fount
  Crossword Maestro: after, favor, ahead, along, being
  BOW: Eiger, Crags, Teton, Cerro, Jebel
  RNN: Eiger, Aosta, Cuneo, Lecco, Tyrol

"Old Testament successor to Moses (6)"
  One Across: Joshua, Exodus, Hebrew, person, across
  Crossword Maestro: devise, Daniel, Haggai, Isaiah, Joseph
  BOW: Isaiah, Elijah, Joshua, Elisha, Yahweh
  RNN: Joshua, Isaiah, Gideon, Elijah, Yahweh

"The former currency of the Netherlands (7)"
  One Across: Holland, general, Lesotho
  Crossword Maestro: Holland, ancient, earlier, onetime, qondam
  BOW: Guilder, Holland, Drenthe, Utrecht, Naarden
  RNN: Guilder, Escudos, Pesetas, Someren, Florins

"Arnold, 20th Century composer, pioneer of atonality (10)"
  One Across: surrealism, laborparty, tonemusics, introduced, Schoenberg
  Crossword Maestro: disharmony, dissonance, bringabout, constitute, triggeroff
  BOW: Schoenberg, Christleib, Stravinsky, Elderfield, Mendelsohn
  RNN: Mendelsohn, Williamson, Huddleston, Mandelbaum, Zimmerman

Table 7: Responses (in rank order) from different models to example crossword clues. In each case the model output is filtered to exclude any candidates that are not of the same length as the correct answer. BOW and RNN models are trained without Word2Vec input embeddings, using the cosine loss.

The final example in Table 7 illustrates the surprising power of the BOW model. In the training data there is a single definition for the correct answer Schoenberg: United States composer and musical theorist (born in Austria) who developed atonal composition. The only word common to both the query and the definition is 'composer' (there is no tokenization that allows the BOW model to directly connect atonal and atonality). Nevertheless, the model is able to infer the necessary connections between the concepts in the query and the definition to return Schoenberg as the top candidate.

Despite such cases, it remains an open question whether, with more diverse training data, the world knowledge required for full open QA (e.g. secondary facts about Schoenberg, such as his family) could be encoded and retained as weights in a (larger) dynamic network, or whether it will be necessary to combine the RNN with an external memory that is less frequently (or never) updated. This latter approach has begun to achieve impressive results on certain QA and entailment tasks (Bordes et al., 2014; Graves et al., 2014; Weston et al., 2015).

5 Conclusion

Dictionaries exist in many of the world's languages. We have shown how these lexical resources can constitute valuable data for training the latest neural language models to interpret and represent the meaning of phrases and sentences.
While humans use the phrasal definitions in dictionaries to better understand the meaning of words, machines can use the words to better understand the phrases. We used two dictionary embedding architectures - a recurrent neural network architecture with a long short-term memory, and a simpler linear bag-of-words model - to explicitly exploit this idea.

On the reverse dictionary task that mirrors its training setting, NLMs that embed all known concepts in a continuous-valued vector space perform comparably to the best known commercial applications despite having access to many fewer definitions. Moreover, they generate smoother sets of candidates and require no linguistic pre-processing or task-specific engineering. We also showed how the description-to-word objective can be used to train models useful for other tasks. NLMs trained on the same data can answer general-knowledge crossword questions, and indeed outperform commercial systems on questions containing more than four words. While our QA experiments focused on crosswords, the results suggest that a similar embedding-based approach may ultimately lead to improved output from more general QA and dialog systems and information retrieval engines in general.

We make all code, training data, evaluation sets and both of our linguistic tools publicly available online for future research. In particular, we propose the reverse dictionary task as a comparatively general-purpose and objective way of evaluating how well models compose lexical meaning into phrase or sentence representations (whether or not they involve training on definitions directly).

In the next stage of this research, we will explore ways to enhance the NLMs described here, especially in the question-answering context. The models are currently not trained on any question-like language, and would conceivably improve on exposure to such linguistic forms. We would also like to understand better how BOW models can perform so well with no 'awareness' of word order, and whether there are specific linguistic contexts in which models like RNNs or others with the power to encode word order are indeed necessary.
Finally, we intend to explore ways to endow the model with richer world knowledge. This may require the integration of an external memory module, similar to the promising approaches proposed in several recent papers (Graves et al., 2014; Weston et al., 2015).

Acknowledgments

KC and YB acknowledge the support of the following organizations: NSERC, Calcul Québec, Compute Canada, the Canada Research Chairs and CIFAR. FH and AK were supported by a Google Faculty Research Award, and AK further by a Google European Fellowship.

References

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of ICLR.
Yoshua Bengio, Patrice Simard, and Paolo Frasconi. 1994. Learning long-term dependencies with gradient descent is difficult. Neural Networks, IEEE Transactions on, 5(2):157-166.
Jonathan Berant and Percy Liang. 2014. Semantic parsing via paraphrasing. In Proceedings of the Association for Computational Linguistics.
James Bergstra, Olivier Breuleux, Frédéric Bastien, Pascal Lamblin, Razvan Pascanu, Guillaume Desjardins, Joseph Turian, David Warde-Farley, and Yoshua Bengio. 2010. Theano: a CPU and GPU math expression compiler. In Proceedings of the Python for Scientific Computing Conference (SciPy).
Slaven Bilac, Timothy Baldwin, and Hozumi Tanaka. 2003. Improving dictionary accessibility by maximizing use of available knowledge. Traitement Automatique des Langues, 44(2):199-224.
Slaven Bilac, Wataru Watanabe, Taiichi Hashimoto, Takenobu Tokunaga, and Hozumi Tanaka. 2004. Dictionary search based on the target word description. In Proceedings of NLP 2014.
Antoine Bordes, Sumit Chopra, and Jason Weston. 2014. Question answering with subgraph embeddings. In Proceedings of EMNLP.
Antoine Bordes, Nicolas Usunier, Sumit Chopra, and Jason Weston. 2015. Large-scale simple question answering with memory networks. arXiv preprint arXiv:1506.02075.
Sarath Chandar, Stanislas Lauly, Hugo Larochelle, Mitesh Khapra, Balaraman Ravindran, Vikas C. Raykar, and Amrita Saha. 2014. An autoencoder approach to learning bilingual word representations. In Advances in Neural Information Processing Systems, pages 1853-1861.
Kyunghyun Cho, Bart Van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Proceedings of EMNLP.
Manaal Faruqui, Jesse Dodge, Sujay K. Jauhar, Chris Dyer, Eduard Hovy, and Noah A. Smith. 2014. Retrofitting word vectors to semantic lexicons. In Proceedings of the North American Chapter of the Association for Computational Linguistics.
David Ferrucci, Eric Brown, Jennifer Chu-Carroll, James Fan, David Gondek, Aditya A. Kalyanpur, Adam Lally, J. William Murdock, Eric Nyberg, John Prager, Nico Schlaefer, and Chris Welty. 2010. Building Watson: An overview of the DeepQA project. In AI Magazine, volume 31(3), pages 59-79.
Matthew L. Ginsberg. 2011. Dr. FILL: Crosswords and an implemented solver for singly weighted CSPs. In Journal of Artificial Intelligence Research, pages 851-886.
Stephan Gouws, Yoshua Bengio, and Greg Corrado. 2014. BilBOWA: Fast bilingual distributed representations without word alignments. In Proceedings of NIPS Deep Learning Workshop.
Alex Graves, Greg Wayne, and Ivo Danihelka. 2014. Neural Turing machines. arXiv preprint arXiv:1410.5401.
Karl Moritz Hermann and Phil Blunsom. 2013. Multilingual distributed representations without word alignment. In Proceedings of ICLR.
Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735-1780.
Eric H. Huang, Richard Socher, Christopher D. Manning, and Andrew Y. Ng. 2012. Improving word representations via global context and multiple word prototypes. In Proceedings of the Association for Computational Linguistics.
Mohit Iyyer, Jordan Boyd-Graber, Leonardo Claudino, Richard Socher, and Hal Daumé III. 2014. A neural network for factoid question answering over paragraphs. In Proceedings of EMNLP.
Mohit Iyyer, Varun Manjunatha, Jordan Boyd-Graber, and Hal Daumé III. 2015. Deep unordered composition rivals syntactic methods for text classification. In Proceedings of the Association for Computational Linguistics.
Ryan Kiros, Ruslan Salakhutdinov, and Richard S. Zemel. 2015. Unifying visual-semantic embeddings with multimodal neural language models. Transactions of the Association for Computational Linguistics. To appear.
Alexandre Klementiev, Ivan Titov, and Binod Bhattarai. 2012. Inducing crosslingual distributed representations of words. In Proceedings of COLING.
Geoffrey Leech, Roger Garside, and Michael Bryant. 1994. CLAWS4: The tagging of the British National Corpus. In Proceedings of COLING.
Michael L. Littman, Greg A. Keim, and Noam Shazeer. 2002. A probabilistic approach to solving crossword puzzles. Artificial Intelligence, 134(1):23-55.
Tomas Mikolov, Martin Karafiát, Lukas Burget, Jan Cernocký, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In Proceedings of INTERSPEECH 2010.
Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S. Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems.
Jeff Mitchell and Mirella Lapata. 2010. Composition in distributional models of semantics. Cognitive Science, 34(8):1388-1429.
Diego Mollá and José Luis Vicedo. 2007. Question answering in restricted domains: An overview. Computational Linguistics, 33(1):41-61.
Ryan Shaw, Anindya Datta, Debra VanderMeer, and Kaushik Dutta. 2013. Building a scalable database-driven reverse dictionary. Knowledge and Data Engineering, IEEE Transactions on, 25(3):528-540.
Ivan Vulic, Wim De Smet, and Marie-Francine Moens. 2011. Identifying word translations from comparable corpora using latent topic models. In Proceedings of the Association for Computational Linguistics.
Jason Weston, Antoine Bordes, Sumit Chopra, and Tomas Mikolov. 2015. Towards AI-complete question answering: A set of prerequisite toy tasks. arXiv preprint arXiv:1502.05698.
Matthew D. Zeiler. 2012. Adadelta: An adaptive learning rate method. arXiv preprint arXiv:1212.5701.
Michael Zock and Slaven Bilac. 2004. Word lookup on the basis of associations: From an idea to a roadmap. In Proceedings of the ACL Workshop on Enhancing and Using Electronic Dictionaries.