Document Modeling with Convolutional-Gated Recurrent Neural Network for Sentiment Classification Duyu Tang, Bing Qin and Ting Liu

Document-level sentiment classification remains a challenge: encoding the intrinsic relations between sentences in the semantic meaning of a document. To address this, we introduce the Convolutional-Gated Recurrent Neural Network (C-GRNN), which learns a vector-based document representation in a unified, bottom-up fashion. C-GRNN first models sentence representations with a convolutional neural network. Afterwards, the semantics of sentences and their relations are adaptively encoded in the document representation with a gated recurrent neural network. We apply C-GRNN to document-level sentiment classification and conduct experiments on four large-scale review datasets from IMDB and Yelp. Experimental results show that: (1) C-GRNN achieves superior performance over several state-of-the-art algorithms; (2) the gated recurrent neural network dramatically outperforms a standard recurrent neural network in document modeling.
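The gating mechanism that lets such a model adaptively keep or overwrite accumulated document state as sentence vectors arrive can be sketched as a standard gated recurrent unit. This is a minimal illustrative sketch, not the paper's exact parameterization; the weight names are assumptions:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gru_step(x, h_prev, Wz, Uz, Wr, Ur, Wh, Uh):
    """One step of a gated recurrent unit: the update gate z decides how much
    of the previous (document-so-far) state to keep when a new sentence
    vector x arrives; the reset gate r controls how much history enters
    the candidate state."""
    z = sigmoid(Wz @ x + Uz @ h_prev)               # update gate
    r = sigmoid(Wr @ x + Ur @ h_prev)               # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h_prev))   # candidate state
    return (1.0 - z) * h_prev + z * h_tilde         # interpolate old and new
```

Because the gates are learned, the network can pass a sentence's semantics through almost unchanged or largely ignore it, which is what distinguishes it from a standard recurrent unit that always overwrites its state.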

System Combination for Multi-document Summarization Kai Hong and Ani Nenkova

We present a novel framework of system combination for multi-document summarization. For each input set (input), we generate candidate summaries by combining the summaries from different systems on the sentence level. We show that the oracle among these candidates is much better than the systems that we have combined. We then present a supervised model to select among the candidates. The model relies on a rich set of features that capture content importance from different perspectives. Our model performs better than the systems that we have combined, based on automatic and manual evaluations. Our model also achieves a performance comparable to the state-of-the-art on six DUC/TAC datasets.

When Are Tree Structures Necessary for Deep Learning of Representations? Jiwei Li, Thang Luong, Dan Jurafsky and Eduard Hovy

Recursive neural models, which use syntactic parse trees to recursively generate representations bottom-up from parse tree children, are a popular new architecture, promising to capture structural properties like long-distance semantic dependencies. But understanding exactly which tasks this parse-based method is appropriate for remains an open question. In this paper we benchmark recursive neural models against sequential recurrent neural models, which are structured solely on word sequences. We investigate 5 tasks: sentiment classification at (1) the sentence level and (2) the phrase level; (3) matching questions to answer-phrases; (4) discourse parsing; and (5) computing semantic relations (e.g., component-whole between nouns). We implement basic versions of recursive and recurrent models and apply them to each task. Our analysis suggests that syntactic tree-based recursive models are helpful for tasks that require representing long-distance relations between words (e.g., semantic relations between nominals), but may not be helpful in other situations, where sequence-based recurrent models can produce equal performance. Our results offer insights on the design of neural architectures for representation learning.

Semantically Conditioned LSTM-based Natural Language Generation for Spoken Dialogue Systems Tsung-Hsien Wen, Milica Gasic, Nikola Mrksic, Pei-Hao Su, David Vandyke and Steve Young

Natural language generation (NLG) is a critical component of spoken dialogue and it has a significant impact both on usability and perceived quality. Most NLG systems in common use employ rules and heuristics and tend to generate rigid and stylised responses without the natural variation of human language. They are also not easily scaled to systems covering multiple domains and languages. This paper presents a statistical language generator based on a semantically controlled Long Short-term Memory (LSTM) structure. The LSTM generator can learn from unaligned data by jointly optimising sentence planning and surface realisation using a simple cross entropy training criterion, and language variation can be easily achieved by sampling from output candidates. An objective evaluation in two differing test domains showed improved performance compared to previous methods with fewer heuristics. Human judges scored the LSTM system higher on informativeness and naturalness and overall preferred it to the other systems.

Detecting Risks in the Banking System by Sentiment Analysis Clemens Nopp and Allan Hanbury

In November 2014, the European Central Bank (ECB) started to directly supervise the largest banks in the Eurozone via the Single Supervisory Mechanism (SSM). While supervisory risk assessments are usually based on quantitative data and surveys, this work explores whether sentiment analysis is capable of measuring a bank's attitude and opinions towards risk by analyzing text data. For this study, we build a collection of more than 500 CEO letters and outlook sections extracted from bank annual reports. Based on these data, two distinct experiments are conducted. The evaluations find promising opportunities, but also limitations, for risk sentiment analysis in banking supervision. At the level of individual banks, predictions are relatively inaccurate. In contrast, the analysis of aggregated figures reveals strong and significant correlations between uncertainty or negativity in textual disclosures and the future evolution of quantitative risk indicators. Risk sentiment analysis should therefore be used for macroprudential analyses rather than for assessments of individual banks.

Cross Lingual Sentiment Analysis using Modified BRAE Sarthak Jain and Shashank Batra

Cross-lingual learning provides a mechanism to adapt NLP tools available for label-rich languages to achieve similar tasks for label-scarce languages. An efficient cross-lingual tool significantly reduces the cost and effort required to manually annotate data. In this paper, we use the Recursive Autoencoder architecture to develop a cross-lingual sentiment analysis tool using sentence-aligned corpora between a resource-rich (English) and a resource-poor (Hindi) language. We analyze the resulting system on a newly developed Hindi movie review dataset with labels given on a rating scale, and we compare the performance of our system against existing systems. We show that our approach significantly outperforms state-of-the-art systems for sentiment analysis, especially when labeled data is scarce.

Hashtag Recommendation Using Dirichlet Process Mixture Models Incorporating Types of Hashtags Qi Zhang, Yeyun Gong and Xuanjing Huang

In recent years, the task of recommending hashtags for microblogs has been given increasing attention. Various methods have been proposed to study the problem from different aspects. However, most of the recent studies have not considered the differences in the types or uses of hashtags. In this paper, we introduce a novel nonparametric Bayesian method for this task. Based on the Dirichlet Process Mixture Models (DPMM), we incorporate the type of hashtag as a hidden variable. The results of experiments on data collected from a real-world microblogging service demonstrate that the proposed method outperforms state-of-the-art methods that do not consider these aspects. Taking these aspects into consideration, the proposed method achieves a relative improvement of around 12.2% in F1-score over the state-of-the-art methods.

ERSOM: A Structural Ontology Matching Approach Using Automatically Learned Entity Representation Chuncheng Xiang, Baobao Chang and Zhifang Sui

As a key representation model of knowledge, ontology has been widely used in many NLP-related tasks, such as semantic parsing, information extraction, and text mining. In this paper, we study the task of ontology matching, which concentrates on finding semantically related entities between different ontologies that describe the same domain, to solve the semantic heterogeneity problem. Previous works exploit different kinds of descriptions of an entity in an ontology directly and separately to find the correspondences, without considering the higher-level correlations between the descriptions. Moreover, the structural information of the ontology has not been utilized adequately for ontology matching. We propose in this paper an ontology matching approach, named ERSOM, which mainly includes an unsupervised representation learning method based on deep neural networks to learn general representations of the entities, and an iterative similarity propagation method that takes advantage of the more abundant structural information of the ontology to discover more mappings.

How Much Information Does a Human Translator Add to the Original? Barret Zoph, Marjan Ghazvininejad and Kevin Knight

We ask how much information a human translator adds to an original text, and we provide a bound. We address this question in the context of bilingual text compression: given a source text, how many bits of additional information are required to specify the target text produced by a human translator? We develop new compression algorithms and establish a benchmark task.

Biography-Dependent Collaborative Entity Archiving for Slot Filling Yu Hong, Xiaobin Wang, Yadong Chen, Jian Wang, Tongtao Zhang and Heng Ji

Current studies on Knowledge Base Population (KBP) tasks, such as slot filling, show the particular importance of entity-oriented automatic acquisition of relevant documents. Rich, diverse, and reliable relevant documents satisfy the fundamental requirement of a KBP system that explores the attributes of an entity, such as provenance-based background knowledge extraction (e.g., a person's religion, origin, etc.). To address the bottleneck between comprehensiveness and precision of acquisition, we propose a collaborative archiving method based on fuzzy-to-exact matching. In particular, we introduce topic modeling methodologies into entity profiling, so as to build a bridge between fuzzy and exact matching. On one side of the bridge, we employ the topics in a small-scale set of high-quality relevant documents (i.e., exact matching results) to summarize the life slices of a target entity (a so-called biography). On the other side, we use the biography as a reliable reference to detect new, truly relevant documents in large-scale pseudo-feedback (i.e., fuzzy matching results). We apply the archiving method in state-of-the-art slot filling systems. Experiments on TAC-KBP data show significant improvement.

Monotone Submodularity in Opinion Summaries Jayanth Jayanth, Jayaprakash Sundararaj and Pushpak Bhattacharyya

We propose a set of submodular functions for opinion summarization. Opinion summarization combines the tasks of summarization and sentiment detection. However, it is not easy to detect sentiment and simultaneously extract a summary. The two tasks conflict in the sense that the demand for compression may drop sentiment-bearing sentences, while the demand for sentiment detection may bring in redundant sentences. Using submodularity, we show how to strike a balance between the two requirements. We investigate a new class of submodular functions for the problem, and a partial-enumeration-based greedy algorithm with a performance guarantee of 63% (i.e., 1 - 1/e) of the optimal. Our functions generate summaries with good correlation between document sentiment and summary sentiment along with good ROUGE scores, outperforming state-of-the-art algorithms.
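The 63% figure is the classic (1 - 1/e) guarantee for greedy maximization of a monotone submodular function under a cardinality constraint. A minimal sketch of the plain greedy step follows; the paper's partial-enumeration variant additionally enumerates over small seed sets, which is omitted here:

```python
def greedy_submodular(ground, f, budget):
    """Plain greedy maximization of a monotone submodular set function f
    under a cardinality budget; the greedy solution is guaranteed to be
    within (1 - 1/e), about 63%, of the optimal value."""
    selected = set()
    while len(selected) < budget:
        # marginal gain of each remaining candidate sentence
        gains = {v: f(selected | {v}) - f(selected) for v in ground - selected}
        if not gains:
            break
        best = max(gains, key=gains.get)
        if gains[best] <= 0:
            break  # no candidate still adds value
        selected.add(best)
    return selected
```

Submodularity (diminishing marginal gains) is exactly what makes this cheap local rule globally near-optimal, which is why it suits objectives mixing coverage, non-redundancy, and sentiment agreement.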

Do we need bigram alignment models? On the effect of alignment quality on transduction accuracy in G2P Steffen Eger

We investigate the need for bigram alignment models and the benefit of supervised alignment techniques in G2P (grapheme-to-phoneme conversion). Moreover, we quantitatively estimate the relationship between alignment quality and overall G2P system performance. We find that, in English, bigram alignment models do perform better than unigram alignment models on the G2P task. Moreover, we find that supervised alignment techniques may perform considerably better than their unsupervised counterparts, and that few manually aligned training pairs suffice for them to do so. Finally, we find that alignment quality has a highly significant impact on overall G2P transcription performance and that this relationship is linear in nature.

Sentence Compression by Deletion with LSTMs Katja Filippova, Enrique Alfonseca, Carlos A. Colmenares, Lukasz Kaiser and Oriol Vinyals

We present an LSTM approach to deletion-based sentence compression where the task is to translate a sentence into a sequence of zeros and ones, corresponding to token deletion decisions. We demonstrate that even the most basic version of the system, which is given no syntactic information (no PoS or NE tags, or dependencies) or desired compression length, performs surprisingly well: around 30% of the compressions from a large test set could be regenerated. We compare the LSTM system with a competitive baseline which is trained on the same amount of data but is additionally provided with all kinds of linguistic features. In an experiment with human raters the LSTM-based model outperforms the baseline achieving 4.5 in readability and 3.8 in informativeness.
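Casting deletion-based compression as the translation of a sentence into a bit sequence amounts to labeling each token keep (1) or delete (0). A small helper, hypothetical and for illustration only, that derives such labels when the compression is a token subsequence of the sentence:

```python
def deletion_labels(sentence, compression):
    """Map a sentence and an extractive compression (a subsequence of its
    tokens) to per-token keep/delete decisions: 1 = keep, 0 = delete."""
    labels, j = [], 0
    for token in sentence:
        if j < len(compression) and token == compression[j]:
            labels.append(1)  # token survives into the compression
            j += 1
        else:
            labels.append(0)  # token is deleted
    return labels
```

At training time the LSTM is fit to predict these bits from the word sequence alone, which is why no syntactic annotation is strictly required.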

CORE: Context-Aware Open Relation Extraction with Factorization Machines Fabio Petroni, Luciano Del Corro and Rainer Gemulla

We propose CORE, a novel matrix factorization model that leverages contextual information for open relation extraction. Our model is based on factorization machines and integrates facts from various sources, such as knowledge bases or open information extractors, as well as the context in which these facts have been observed. We argue that integrating contextual information---such as metadata about extraction sources, lexical context, or type information---significantly improves prediction performance. Open information extractors, for example, may produce extractions that are unspecific or ambiguous when taken out of context. Our experimental study on a large real-world dataset indicates that CORE has significantly better prediction performance than state-of-the-art approaches when contextual information is available.
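Factorization machines score a sparse feature vector with a bias, linear terms, and pairwise interactions factorized through low-rank embeddings. A minimal numpy sketch of the standard second-order FM score, not CORE's full context-augmented model, might look like:

```python
import numpy as np

def fm_score(x, w0, w, V):
    """Second-order factorization machine: global bias w0, linear weights w,
    and pairwise interactions factorized through rank-k embeddings V (one
    row per feature). Uses the O(nk) identity
    sum_{i<j} (v_i . v_j) x_i x_j
        = 0.5 * sum_f [(sum_i V[i,f] x_i)^2 - sum_i V[i,f]^2 x_i^2]."""
    linear = w0 + w @ x
    interactions = 0.5 * (((V.T @ x) ** 2).sum() - ((V ** 2).T @ (x ** 2)).sum())
    return linear + interactions
```

The factorized interactions are what let the model generalize to feature pairs (e.g., a fact and a context signal) never observed together in training.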

Identifying Political Sentiment between Nation States with Social Media Nathanael Chambers

This paper describes a new model and application of sentiment analysis for the social sciences. The goal is to model relations between nation states with social media. Many cross-disciplinary applications of NLP involve making predictions (such as predicting political elections), but this paper instead focuses on a model that is applicable to political science analysis. Do citizens express opinions in line with their home country's formal relations? When opinions diverge over time, what is the cause and can social media serve to detect these changes? We propose several learning algorithms to study how the populace of a country discusses foreign nations on Twitter, ranging from bootstrap learning of irrelevant tweets to state-of-the-art contextual sentiment analysis. We evaluate on standard sentiment evaluations, but we also show strong correlations with two public opinion polls and current international alliance relationships. We conclude with some political science use cases.

Language and Domain Independent Entity Linking with Quantified Collective Validation Han Wang, Jin Guang Zheng, Xiaogang Ma, Peter Fox and Heng Ji

Linking named mentions detected in a source document to an existing knowledge base provides disambiguated entity referents for the mentions. This allows better document analysis, knowledge extraction and knowledge base population. Most of the previous research extensively exploited the linguistic features of the source documents in a supervised or semi-supervised way. These systems therefore cannot be easily applied to a new language or domain. In this paper, we present a novel unsupervised algorithm named Quantified Collective Validation that avoids excessive linguistic analysis on the source documents and fully leverages the knowledge base structure for the entity linking task. We show our approach achieves state-of-the-art English entity linking performance and demonstrate successful deployment in new languages (Chinese) and new domains (Biomedical and Earth Science).

Parsing English into Abstract Meaning Representation Using Syntax-Based Machine Translation Michael Pust, Ulf Hermjakob, Kevin Knight, Daniel Marcu and Jonathan May

We present a parser for Abstract Meaning Representation (AMR). We treat English-to-AMR conversion within the framework of string-to-tree, syntax-based machine translation (SBMT). To make this work, we transform the AMR structure into a form suitable for the mechanics of SBMT and useful for modeling. We introduce an AMR-specific language model and add data and features drawn from semantic resources. Our resulting AMR parser significantly improves upon state-of-the-art results.

A Utility Model of Authors in the Scientific Community Yanchuan Sim, Bryan Routledge and Noah A. Smith

Authoring a scientific paper is a complex process involving many decisions. We introduce a probabilistic model of some of the important aspects of that process: that authors have individual preferences; that writing a paper requires trading off among the preferences of authors as well as extrinsic rewards in the form of community response to their papers; and that preferences (of individuals and the community) and tradeoffs vary over time. Variants of our model lead to improved predictive accuracy of citations given texts and texts given authors. Further, our model's posterior suggests an interesting relationship between seniority and author choices.

Improved Arabic Dialect Classification on Social Media Data Fei Huang

Arabic dialect classification has been an important and challenging problem for Arabic language processing, especially for social media text analysis and machine translation. In this paper we propose an approach to improving Arabic dialect classification with semi-supervised learning: multiple classifiers are trained with weakly supervised, strongly supervised, and unsupervised data. Their combination yields significant and consistent improvement on two different test sets. The dialect classification accuracy is improved by 5% over the strongly supervised classifier and 20% over the weakly supervised classifier. Furthermore, when applying the improved dialect classifier to build a Modern Standard Arabic (MSA) language model (LM), the new model size is reduced by 70% while the English-Arabic translation quality is improved by 0.6 BLEU points.

Modeling Relation Paths for Representation Learning of Knowledge Bases Yankai Lin, Zhiyuan Liu and Maosong Sun

Representation learning of knowledge bases (KBs) aims to embed both entities and relations into a low-dimensional space. Most existing methods only consider direct relations in representation learning. We argue that multiple-step relation paths also contain rich inference patterns between entities, and propose a path-based representation learning model. This model considers relation paths as translations between entities for representation learning, and addresses two key challenges: (1) Since not all relation paths are reliable, we design a path-constraint resource allocation algorithm to measure the reliability of relation paths. (2) We represent relation paths via semantic composition of relation embeddings. Experimental results on real-world datasets show that, as compared with baselines, our model achieves significant and consistent improvements on knowledge base completion and relation extraction from text.
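The translation view of a relation path composes relation vectors by addition, in the spirit of TransE: h + r1 + ... + rk should land near t for a plausible path. A toy sketch under that assumption (illustrative names, not the paper's exact scoring function, which also weights paths by their reliability):

```python
import numpy as np

def path_score(h, t, path_relations):
    """Score a relation path as a translation in embedding space: compose
    the path by summing its relation vectors, then measure how close
    h + r1 + ... + rk lands to the tail embedding t (higher = better)."""
    translation = h + sum(path_relations)
    return -np.linalg.norm(translation - t, ord=1)
```

A multi-step path such as (born_in, city_of) can thereby support the direct relation nationality even when that triple is missing from the KB.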

Phrase-based Compressive Cross-Language Summarization Jin-ge Yao, Xiaojun Wan and Jianguo Xiao

The task of cross-language document summarization is to create a summary in a target language from documents in a different source language. Previous methods only involve direct extraction of automatically translated sentences from the original documents. In this work we propose a phrase-based model to simultaneously perform sentence scoring, extraction and compression. We design a greedy algorithm to approximately optimize the score function. Experimental results show that our methods outperform the state-of-the-art extractive systems while maintaining similar grammatical quality.

An Empirical Comparison Between N-gram and Syntactic Language Models for Word Ordering Jiangming Liu and Yue Zhang

Syntactic language models and N-gram language models have both been used in word ordering. In this paper, we give an empirical comparison between N-gram and syntactic language models on the word ordering task. Our results show that the quality of automatically-parsed training data has a relatively small impact on syntactic models. Both syntactic and N-gram models can benefit from large-scale raw text. Compared with N-gram models, syntactic models give overall better performance, but they require much more training time. In addition, the two models lead to different error distributions in word ordering. A combination of the two models integrates the advantages of each model, achieving the best result on a standard benchmark.

Multilingual discriminative lexicalized phrase structure parsing Benoit Crabbé

We provide a generalization of discriminative lexicalized shift-reduce parsing techniques for phrase structure grammar to a wide range of morphologically rich languages. The model is efficient and outperforms recent strong baselines on almost all languages considered. It takes advantage of a dependency-based modelling of morphology and a shallow modelling of constituency boundaries.

All the Right Reasons: Semi-supervised Argumentation Mining in User-generated Web Discourse Ivan Habernal and Iryna Gurevych

Analyzing arguments in user-generated Web discourse has recently gained attention in argumentation mining, an evolving field of NLP. Current approaches, which employ fully-supervised machine learning, are usually domain dependent and suffer from the lack of large and diverse annotated corpora. However, annotating arguments in discourse is costly, error-prone, and highly context-dependent. We asked whether leveraging unlabeled data in a semi-supervised manner can boost the performance of argument component identification, and to what extent the approach is independent of domain and register. We propose novel features that exploit clustering of unlabeled data from debate portals based on a word-embedding representation. Using these features, we significantly outperform several strong baselines in the cross-validation, cross-domain, and cross-register evaluation scenarios.

Learning Semantic Representations for Nonterminals in Hierarchical Phrase-Based Translation Xing Wang and Deyi Xiong

In hierarchical phrase-based translation, coarse-grained nonterminal Xs may generate inappropriate translations due to the lack of sufficient information for phrasal substitution. In this paper we propose a framework to refine nonterminals in hierarchical translation rules with real-valued semantic representations. The semantic representations are learned via a weighted-mean-value method and a minimum-distance method, using phrase vector representations obtained from a large-scale monolingual corpus. Based on the learned semantic vectors, we build a semantic nonterminal refinement model to measure semantic similarities between phrasal substitutions and nonterminal Xs in translation rules. Experimental results on Chinese-English translation show that the proposed model significantly improves translation quality on NIST test sets.

Topic Identification and Discovery on Text and Speech Chandler May, Francis Ferraro, Alan McCree, Jonathan Wintrode, Daniel Garcia-Romero and Benjamin Van Durme

We compare the multinomial i-vector framework from the speech community with LDA, SAGE, and LSA as feature learners for topic ID on multinomial text and speech data. We also compare the learned representations in their ability to discover topics, quantified by distributional similarity to gold-standard topics and by human interpretability. We find that topic ID and topic discovery are competing objectives. We argue that LSA and i-vectors should be more widely considered by the text processing community as pre-processing steps for downstream tasks, and also speculate about speech processing tasks that could benefit from more interpretable representations like SAGE.

Sentiment Flow - A General Model of Web Review Argumentation Henning Wachsmuth, Johannes Kiesel and Benno Stein

Web reviews have been intensively studied in argumentation-related tasks such as sentiment analysis. However, due to their focus on content-based features, many sentiment analysis approaches are effective only for reviews from those domains they have been specifically modeled for. This paper puts its focus on domain independence and asks whether a general model can be found for how people argue in web reviews. Our hypothesis is that people express their global sentiment on a topic with similar sequences of local sentiment independent of the domain. We model such sentiment flow robustly under uncertainty through abstraction. To test our hypothesis, we predict global sentiment based on sentiment flow. In systematic experiments, we improve over the domain independence of strong baselines. Our findings suggest that sentiment flow qualifies as a general model of web review argumentation.

Better Document-level Sentiment Analysis from RST Discourse Parsing Parminder Bhatia, Yangfeng Ji and Jacob Eisenstein

Discourse structure has long been thought to have the potential to improve the prediction of document-level labels, such as sentiment polarity. We present successful applications of Rhetorical Structure Theory (RST) to document-level sentiment analysis, via composition of local information up the discourse tree. First, we show that RST offers substantial improvements in lexicon-based sentiment analysis, via a reweighting of discourse units according to their position in a dependency representation of the rhetorical structure. Next, we present a recursive neural network over the RST structure, which offers significant improvements over classification-based sentiment polarity analysis.

Language Understanding for Text-based Games Using Deep Reinforcement Learning Karthik Narasimhan, Tejas Kulkarni and Regina Barzilay

In this paper, we consider the task of learning control policies for text-based games. In these games, all interactions in the virtual world are through text and the underlying state is not observed. The resulting language barrier makes such environments challenging for automatic game players. We employ a deep reinforcement learning framework to jointly learn state representations and action policies using game rewards as feedback. This framework enables us to map text descriptions into vector representations that capture the semantics of the game states. We evaluate our approach on two game worlds, comparing against a baseline with a bag-of-words state representation. Our algorithm outperforms the baseline on quest completion by 54% on a newly created world and by 14% on a pre-existing fantasy game.

Open Extraction of Fine-Grained Political Opinion David Bamman and Noah A. Smith

Text data has recently been used as evidence in estimating the political ideologies of individuals, including political elites and social media users. While inferences about people are often the intrinsic quantity of interest, we draw inspiration from open information extraction to identify a new task: inferring the political import of propositions like "Obama is a Socialist." We present several models that exploit the structure that exists between people and the assertions they make to learn latent positions of people and propositions at the same time, and we evaluate them on a novel dataset of propositions judged on a political spectrum.

A Transition-based Model for Joint Segmentation, POS-tagging and Normalization Tao Qian, Yue Zhang, Meishan Zhang and Donghong JI

Two central challenges of text normalization on Chinese Microtext are the error propagation from word segmentation and the lack of annotated corpora. Inspired by the joint model of word segmentation and POS tagging, we propose a transition-based joint model of word segmentation, POS tagging and text normalization. The model can be trained on standard text corpora, overcoming the lack of annotated Microtext corpora. To evaluate our model, we develop an annotated corpus based on Microtext. Experimental results show that our joint model can help improve the performance of word segmentation on Microtext, giving an error reduction in segmentation accuracy of 22.49%, compared to the traditional approach.

Feature-Rich Two-Stage Logistic Regression for Monolingual Alignment Md Arafat Sultan, Steven Bethard and Tamara Sumner

Monolingual alignment is the task of pairing semantically similar units from two pieces of text. We report a top-performing supervised aligner that operates on short text snippets. We employ a large feature set to (1) encode similarities among semantic units (words and named entities) in context, and (2) address cooperation and competition for alignment among units in the same snippet. These features are deployed in a two-stage logistic regression framework for alignment. On two benchmark data sets, our aligner achieves F1 scores of 92.1% and 88.5%, with statistically significant error reductions of 4.8% and 7.3% over the previous best aligner. It produces top results in extrinsic evaluation as well.

Hierarchical Back-off Modeling of Hiero Grammar based on Non-parametric Bayesian Model Hidetaka Kamigaito, Taro Watanabe, Hiroya Takamura, Manabu Okumura and Eiichiro Sumita

In hierarchical phrase-based machine translation, a rule table is automatically learned by heuristically extracting synchronous rules from a parallel corpus. As a result, a spuriously large number of rules is extracted, many of which may be incorrect. A larger rule table incurs longer decoding time and may result in lower translation quality. To resolve these problems, we propose a hierarchical back-off model for Hiero grammar, an instance of a synchronous context-free grammar (SCFG), on the basis of the hierarchical Pitman-Yor process. The model can extract a compact rule and phrase table without resorting to any heuristics, by hierarchically backing off to smaller phrases under the SCFG. Inference is efficiently carried out using the two-step synchronous parsing of Xiao et al. (2012) combined with slice sampling. In our experiments, the proposed model achieved higher translation quality, measured by BLEU, than a previous Bayesian model on various language pairs: German/French/Spanish/Japanese-to-English.

Input Method Logs as Natural Annotations for Word Segmentation Fumihiko Takahasi and Shinsuke Mori

In this paper we propose a framework to improve word segmentation accuracy using input method logs. An input method is software used to type sentences in languages which have far more characters than the number of keys on a keyboard. The main contributions of this paper are: 1) an input method server that proposes word candidates which are not included in the vocabulary, 2) a publicly usable input method that logs user behavior (like typing and selection of word candidates), and 3) a method for improving word segmentation by using these logs. We conducted word segmentation experiments on tweets from Twitter, and showed that our method improves accuracy in this domain. Our method itself is domain-independent and only needs logs from the target domain.

A Neural Attention Model for Abstractive Sentence Summarization Sumit Chopra, Jason Weston and Alexander M. Rush

Summarization based on text extraction is inherently limited, but generation-style abstractive methods have proven challenging to build. In this work, we propose a fully data-driven approach to abstractive sentence summarization. Our method is based on a local attention-based model that generates each word of the summary conditioned on the input sentence. Unlike many abstractive approaches it does not rely on any text preprocessing steps. While the model is structurally simple, it can easily be trained end-to-end and scales to a large amount of training data. The model shows significant performance gains on the DUC-2004 shared task compared with several strong baselines.
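The attention at the core of such a model weights input positions by their softmax similarity to the decoder state and sums them into a context vector for the next summary word. A generic soft-attention sketch with illustrative names, not the paper's exact local-attention parameterization:

```python
import numpy as np

def attend(query, keys, values):
    """Generic soft attention: score each input position against the decoder
    state (query), softmax the scores into weights, and return the weighted
    sum of input vectors as the context for generating the next word."""
    scores = keys @ query                      # one score per input position
    weights = np.exp(scores - scores.max())    # numerically stable softmax
    weights /= weights.sum()
    return weights @ values                    # context vector
```

Because the weights are recomputed at every output step, the generator can focus on different parts of the input sentence for different summary words.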

Stochastic Top-k ListNet Tianyi Luo, Dong Wang, Rong Liu and Yiqiao Pan

ListNet is a well-known listwise learning to rank model and has gained much attention in recent years. A particular problem of ListNet, however, is its high computational complexity in model training, mainly due to the large number of object permutations involved in computing the gradients. This paper proposes a stochastic ListNet approach which computes the gradient within a bounded permutation subset. It significantly reduces the computational complexity of model training and allows extension to Top-k models, which is impossible with the conventional implementation based on full-set permutations. Meanwhile, the new approach utilizes partial ranking information of human labels, which helps improve model quality. Our experiments demonstrated that the stochastic ListNet method indeed leads to better ranking performance and speeds up model training remarkably.
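As a rough illustrative sketch of the core idea (not the authors' implementation), the following Python estimates a top-k ListNet-style cross-entropy over a small sampled subset of permutations instead of all n! orderings. The Plackett-Luce formulation and all function names here are assumptions made for illustration.

```python
import random
import math

def listnet_topk_prob(scores, perm, k):
    """Plackett-Luce probability of the first k positions of `perm` under `scores`."""
    p = 1.0
    remaining = list(range(len(scores)))
    for pos in perm[:k]:
        denom = sum(math.exp(scores[i]) for i in remaining)
        p *= math.exp(scores[pos]) / denom
        remaining.remove(pos)
    return p

def stochastic_listnet_loss(label_scores, model_scores, k=1, n_samples=5, seed=0):
    """Cross-entropy between label- and model-induced top-k distributions,
    estimated over a sampled subset of permutations rather than the full set."""
    rng = random.Random(seed)
    n = len(label_scores)
    perms = [rng.sample(range(n), n) for _ in range(n_samples)]
    loss = 0.0
    for perm in perms:
        p_label = listnet_topk_prob(label_scores, perm, k)
        p_model = listnet_topk_prob(model_scores, perm, k)
        loss += -p_label * math.log(p_model)
    return loss
```

With n objects and k = 1, each gradient step touches only `n_samples` permutations instead of n!, which is what makes Top-k training tractable.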

Joint Mention Extraction and Classification with Mention Hypergraphs Wei Lu and Dan Roth

We present a novel model for the task of joint mention extraction and classification. Unlike existing approaches, our model is able to effectively capture overlapping mentions whose lengths are unbounded. Our model is highly scalable, with a time complexity that is linear in the number of words in the input sentence and linear in the number of possible mention classes. The model can be extended to additionally capture mention heads explicitly in a joint manner under the same time complexity. We demonstrate the effectiveness of our model through extensive experiments on standard datasets.

Intra-sentential Zero Anaphora Resolution using Subject Sharing Recognition Ryu Iida, Kentaro Torisawa, Chikara Hashimoto, Jong-Hoon Oh and Julien Kloetzer

In this work, we improve the performance of intra-sentential zero anaphora resolution in Japanese using a novel method of recognizing subject sharing relations. In Japanese, a large portion of intra-sentential zero anaphora can be regarded as subject sharing relations between predicates, that is, the subject of some predicate is also the unrealized subject of other predicates. We develop a highly accurate recognizer of subject sharing relations for pairs of predicates in a single sentence, and then construct a subject shared predicate network, which is a set of predicates that are linked by the subject sharing relations recognized by our recognizer. We finally combine our zero anaphora resolution method exploiting the subject shared predicate network and a state-of-the-art ILP-based zero anaphora resolution method. Our combined method achieved significantly better F-score than the ILP-based method alone on intra-sentential zero anaphora resolution in Japanese. To the best of our knowledge, this is the first work to explicitly use an independent subject sharing recognizer in zero anaphora resolution.

Graph-Based Collective Lexical Selection for Statistical Machine Translation Jinsong Su, Deyi Xiong, Xianpei Han and Junfeng Yao

Lexical selection is of great importance to statistical machine translation. In this paper, we propose a graph-based framework for collective lexical selection. The framework is established on a translation graph that captures not only local associations between source-side content words and their target translations but also target-side global dependencies in terms of relatedness among target items. We also introduce a random walk style algorithm to collectively identify translations of source-side content words that are strongly related in the translation graph. We validate the effectiveness of our lexical selection framework on Chinese-English translation. Experimental results with large-scale training data show that our approach significantly improves lexical selection.
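A random-walk-style collective scoring over a translation graph can be sketched as a random walk with restart, where the restart distribution plays the role of the local association scores. This is an illustrative approximation of the general technique, not the paper's exact algorithm, and all names are hypothetical.

```python
import numpy as np

def random_walk_scores(adj, restart, alpha=0.85, iters=50):
    """Random walk with restart over a translation graph.

    adj[i][j]: relatedness weight between candidate translations i and j.
    restart:   per-candidate local association scores (unnormalized).
    Returns a stationary score vector combining both signals."""
    A = np.asarray(adj, dtype=float)
    # Column-normalize so each walk step is a probability distribution.
    col = A.sum(axis=0)
    col[col == 0] = 1.0
    P = A / col
    r = np.asarray(restart, dtype=float)
    r = r / r.sum()
    s = r.copy()
    for _ in range(iters):
        s = alpha * (P @ s) + (1 - alpha) * r
    return s
```

Candidates that are both locally plausible and well connected to other selected translations accumulate the most mass.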

Corpus-level Fine-grained Entity Typing using Contextual Information Yadollah Yaghoobzadeh and Hinrich Schütze

We address the problem of fine-grained corpus-level entity typing, i.e., inferring from a large corpus that an entity is a member of a class such as ``food'' or ``artist''. In contrast to prior work that has focused on clean data and occurrences of entities in a limited set of contexts, we develop FIGMENT, an embedding-based entity typer that works well on noisy text and considers all contexts of the entity. We compare a global model that does typing based on aggregate corpus information and a context model that analyzes contexts individually, and find that their combination gives the best results.

Hierarchical Low-Rank Tensors for Multilingual Transfer Parsing Yuan Zhang and Regina Barzilay

Accurate multilingual transfer parsing typically relies on careful feature engineering. In this paper, we propose a hierarchical tensor-based approach for this task. This approach induces a compact feature representation by combining atomic features. However, unlike traditional tensor models, it enables us to incorporate prior knowledge about desired feature interactions, eliminating spurious feature combinations. To this end, we use a hierarchical structure that uses intermediate embeddings to capture desired feature combinations. From the algebraic view, this hierarchical tensor is equivalent to the sum of traditional tensors with shared components, and thus can be effectively trained with standard online algorithms. In both unsupervised and semi-supervised transfer scenarios, our hierarchical tensor consistently improves UAS and LAS over state-of-the-art multilingual transfer parsers and the traditional tensor model across 10 different languages.

Discourse parsing for multi-party chat dialogues Eric Kow, Stergos Afantenos, Nicholas Asher and Jérémy Perret

In this paper we present, to the best of our knowledge, the first discourse parser for multi-party chat dialogues. Discourse in multi-party dialogues differs dramatically from monologues, since threaded conversations are commonplace, making prediction of the discourse structure a compelling problem. Moreover, the fact that our data come from chats renders syntactic and lexical information of little use, since people take great liberties in expressing themselves lexically and syntactically. We use the dependency parsing paradigm, as has been done in the past (Muller et al., 2012; Li et al., 2014): we learn local probability distributions and then use MST decoding. We achieve 0.680 F1 on unlabelled structures and 0.516 F1 on fully labelled structures, which is better than many state-of-the-art systems for monologues, despite the inherent difficulties of multi-party chat dialogues.

A Comparison between Count and Neural Network Models Based on Joint Translation and Reordering Sequences Andreas Guta, Tamer Alkhouli, Jan-Thorsten Peter, Joern Wuebker and Hermann Ney

We propose a conversion of bilingual sentence pairs and the corresponding word alignments into novel linear sequences. These uniquely defined joint translation and reordering (JTR) sequences combine interdependent lexical and alignment dependencies at the word level into a single framework. They are constructed in a simple manner while capturing multiple alignments and empty words. JTR sequences can be used to train a variety of models. We investigate the performance of n-gram models with modified Kneser-Ney smoothing and of feed-forward and recurrent neural network architectures estimated on JTR sequences, and compare them to the operation sequence model (Durrani et al., 2013). Evaluations on the IWSLT German-English, WMT German-English and BOLT Chinese-English tasks show that JTR models improve state-of-the-art phrase-based systems by up to +2.2 BLEU.

Diversity in Spectral Learning for Natural Language Parsing Shashi Narayan and Shay B. Cohen

We describe an approach to incorporate diversity into spectral learning of latent-variable PCFGs (L-PCFGs). Our approach works by creating multiple spectral models where noise is added to the underlying features in the training set before the estimation of each model. We describe three ways to decode with multiple models. In addition, we describe a simple variant of the spectral algorithm for L-PCFGs that is fast and leads to compact models. Our experiments on natural language parsing, for English and German, show a significant improvement over the baselines and results comparable to the state of the art. For English, we achieve an F1 score of 90.18, and for German an F1 score of 83.38.

Dependency Graph-to-String Translation Liangyou Li, Andy Way and Qun Liu

Compared to trees, graphs are more powerful for representing natural language, and the corresponding graph grammars have stronger generative capacity over structures than tree grammars. Based on edge replacement grammar, in this paper we propose a synchronous graph-to-string grammar for statistical machine translation. The graph we use is directly converted from a dependency tree. We build our translation model in the log-linear framework with 9 standard features. Large-scale experiments on Chinese-English and German-English tasks show that our model is significantly better than the state-of-the-art hierarchical phrase-based (HPB) model and a recent dependency tree-to-string model on BLEU, METEOR and TER scores. Experiments also suggest that our model better handles long-distance reordering and is more suitable for translating long sentences.

Knowledge Base Unification via Sense-based Embeddings and Disambiguation Claudio Delli Bovi, Luis Espinosa Anke and Roberto Navigli

We present a novel approach for integrating the output of many different Open Information Extraction systems into a single unified and fully disambiguated knowledge repository. Our approach consists of three main steps: (1) disambiguation of relation argument pairs via a semantically-enhanced vector space model and a large unified sense inventory; (2) ranking of semantic relations according to their degree of specificity; (3) cross-resource relation alignment and merging based on the semantic similarity of relation domains and ranges. We tested our approach on a set of four heterogeneous knowledge bases, obtaining high-quality results.

Semantic Role Labeling with Neural Network Factors Nicholas FitzGerald, Oscar Täckström, Kuzman Ganchev and Dipanjan Das

We present a new method for semantic role labeling in which arguments and semantic roles are jointly embedded in a shared vector space for a given predicate. These embeddings belong to a neural network, whose output represents the potential functions of a graphical model designed for the SRL task. We consider both local and structured learning methods and obtain state-of-the-art results on standard PropBank and FrameNet corpora with a straightforward product-of-experts model. We further show how the model can learn jointly from PropBank and FrameNet annotations to obtain additional improvements on the smaller FrameNet dataset.

Hierarchical Recurrent Neural Network for Document Modeling Rui Lin, Shujie Liu, Muyun Yang, Mu Li, Ming Zhou and Sheng Li

This paper proposes a novel hierarchical recurrent neural network language model (HRNNLM) for document modeling. After establishing an RNN to capture the coherence between sentences in a document, HRNNLM integrates it as sentence history information into the word-level RNN to predict the word sequence with cross-sentence contextual information. A two-step training approach is designed, in which sentence-level and word-level language models are approximated for convergence in a pipeline style. Examined in the standard sentence ordering scenario, HRNNLM proves more accurate at modeling sentence coherence. At the word level, experimental results also indicate significantly lower model perplexity, followed by better translation results when applied to a Chinese-English document translation reranking task.

Evaluation methods for unsupervised word embeddings Igor Labutov, David Mimno and Thorsten Joachims

We present a comprehensive study of evaluation methods for unsupervised embedding techniques that obtain meaningful representations of words from text. Different evaluations result in different orderings of embedding methods, calling into question the common assumption that there is one single optimal vector representation. We present new evaluation techniques that directly compare embeddings with respect to specific queries. These methods reduce bias, provide greater insight, and allow us to solicit data-driven relevance judgments rapidly and accurately through crowdsourcing.
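A minimal sketch of a query-based comparison: rank the vocabulary by cosine similarity to a query word under each embedding space and inspect the neighbor lists side by side. The helper functions below are hypothetical illustrations, not the paper's evaluation code.

```python
import numpy as np

def nearest_neighbors(emb, query, k=3):
    """Rank all other vocabulary items by cosine similarity to `query`.

    `emb` maps each word to its embedding vector."""
    vocab = [w for w in emb if w != query]
    M = np.array([emb[w] for w in vocab], dtype=float)
    q = np.asarray(emb[query], dtype=float)
    sims = M @ q / (np.linalg.norm(M, axis=1) * np.linalg.norm(q) + 1e-12)
    return [vocab[i] for i in np.argsort(-sims)[:k]]

def compare_embeddings(emb_a, emb_b, query, k=3):
    """Side-by-side neighbor lists for one query under two embedding spaces."""
    return nearest_neighbors(emb_a, query, k), nearest_neighbors(emb_b, query, k)
```

Disagreements between the two neighbor lists for the same query are exactly the cases one would surface for relevance judgments.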

Confounds and Consequences in Geotagged Twitter Data Umashanthi Pavalanathan and Jacob Eisenstein

Twitter is often used in quantitative studies that identify geographically-preferred topics, writing styles, and entities. These studies rely on either GPS coordinates attached to individual messages, or on the user-supplied location field in each profile. In this paper, we compare these data acquisition techniques and quantify the biases that they introduce; we also measure their effects on linguistic analysis and text-based geolocation. GPS-tagging and self-reported locations yield measurably different corpora, and these linguistic differences are partially attributable to differences in dataset composition by age and gender. Using a latent variable model to induce age and gender, we show how these demographic variables interact with geography to affect language use. We also show that the accuracy of text-based geolocation varies with population demographics, giving the best results for men above the age of 40.

Joint A* CCG Parsing and Semantic Role Labelling Mike Lewis, Luheng He and Luke Zettlemoyer

Joint models of syntactic and semantic parsing have the potential to improve performance on both tasks---but to date, the best results have been achieved with pipelines. We introduce a joint model using CCG, which is motivated by the close link between CCG syntax and semantics. Semantic roles are recovered by labelling the deep dependency structures produced by the grammar. Furthermore, because CCG is lexicalized, we show it is possible to factor the parsing model over words and introduce a new A* parsing algorithm---which we demonstrate is faster and more accurate than adaptive supertagging. Our joint model is the first to substantially improve both syntactic and semantic accuracy over a comparable pipeline, and also achieves state-of-the-art results for a non-ensemble semantic role labelling model.

Long Short-Term Memory Neural Networks for Chinese Word Segmentation Xinchi Chen, Xipeng Qiu and Xuanjing Huang

Currently, most state-of-the-art methods for Chinese word segmentation are based on supervised learning, with features mostly extracted from a local context. These methods cannot utilize long-distance information, which is also crucial for word segmentation. In this paper, we propose a novel neural network model for Chinese word segmentation that adopts the long short-term memory (LSTM) neural network to keep important previous information in memory cells, avoiding the window-size limit of the local context. Experiments on the PKU, MSRA and CTB6 benchmark datasets show that our model outperforms previous neural network models and state-of-the-art methods.
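For intuition only, here is a single LSTM step in NumPy showing how the gated memory cell can carry information across arbitrary distances, beyond any fixed local window. This is the generic LSTM recurrence, not the paper's full segmentation model.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, params):
    """One LSTM step: gates decide what to keep in the memory cell `c`,
    which is what lets a tagger carry long-distance context."""
    Wf, Wi, Wo, Wg, bf, bi, bo, bg = params
    z = np.concatenate([x, h])    # input and previous hidden state
    f = sigmoid(Wf @ z + bf)      # forget gate
    i = sigmoid(Wi @ z + bi)      # input gate
    o = sigmoid(Wo @ z + bo)      # output gate
    g = np.tanh(Wg @ z + bg)      # candidate cell update
    c_new = f * c + i * g         # memory cell: gated mix of old and new
    h_new = o * np.tanh(c_new)    # exposed hidden state
    return h_new, c_new
```

Because `c_new` is a gated sum rather than a repeated squashing, information from many steps back can survive in the cell when the forget gate stays open.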

Transition-based Dependency Parsing Using Two Heterogeneous Gated Recursive Neural Networks Xinchi Chen, Xipeng Qiu and Xuanjing Huang

Recently, neural-network-based dependency parsing has attracted much interest, as it can effectively alleviate the problems of data sparsity and feature engineering by using dense features. However, it remains a challenging problem to sufficiently model the complicated syntactic and semantic compositions of the dense features in neural-network-based methods. In this paper, we propose two heterogeneous gated recursive neural networks: the tree-structured gated recursive neural network (Tree-GRNN) and the directed-acyclic-graph-structured gated recursive neural network (DAG-GRNN). We then integrate them to automatically learn compositions of the dense features for transition-based dependency parsing. Specifically, Tree-GRNN models the feature combinations for the trees in the stack, which already have partial dependency structures. DAG-GRNN models the feature combinations of the nodes whose dependency relations have not yet been built. Experimental results on two prevalent benchmark datasets (PTB3 and CTB5) show the effectiveness of our proposed model.

Efficient Algorithm for Incorporating Knowledge into Topic Models Yi Yang, Doug Downey and Jordan Boyd-Graber

Latent Dirichlet allocation (LDA) is a popular topic modeling technique for exploring hidden topics in text corpora. Increasingly, topic modeling is trying to scale to larger topic spaces, and utilize richer forms of prior knowledge, such as word correlations or document labels. However, inference is cumbersome for LDA models with prior knowledge. As a result, LDA models that use prior knowledge only work in small-scale scenarios. In this work, we propose a factor graph framework, Sparse Constrained LDA (SC-LDA), for efficiently incorporating prior knowledge into LDA. In experiments, we evaluate SC-LDA's ability to incorporate word correlation knowledge and document label knowledge on three benchmark datasets. Compared to several baseline methods, SC-LDA achieves comparable performance but runs significantly faster.

Open-Domain Name Error Detection using a Multi-Task RNN Hao Cheng, Hao Fang and Mari Ostendorf

Out-of-vocabulary name errors in speech recognition create significant problems for downstream language processing, but the fact that they are rare poses challenges for automatic detection, particularly in an open-domain scenario. To address this problem, a multi-task recurrent neural network language model for sentence-level name detection is proposed for use in combination with out-of-vocabulary word detection. The sentence-level model is also effective for leveraging external text data. Experiments show a 26% improvement in name-error detection F-score.

Detecting Information-Heavy Sentences: A Cross-Language Case Study Junyi Jessy Li and Ani Nenkova

Some sentences, even if they are grammatical, contain too much information, and the content they convey would be more accessible to a reader if expressed in multiple sentences. We call such sentences information-heavy. In this paper we introduce the task of detecting information-heavy sentences in a cross-lingual context. Specifically, we develop methods to identify sentences in Chinese for which English speakers would prefer translations consisting of more than one sentence. We base our analysis and definitions on evidence from multiple human translations and reader preferences on flow and understandability. We show that machine translation quality when translating information-heavy sentences is markedly worse than overall quality, and that this type of sentence is fairly common in Chinese news. We demonstrate that sentence length and punctuation usage in Chinese are not sufficient clues for accurately detecting heavy sentences, and present a richer classification model that accurately identifies them.

Distributional vectors encode referential attributes Abhijeet Gupta, Gemma Boleda, Marco Baroni and Sebastian Padó

Distributional methods have proven to excel at capturing fuzzy, graded aspects of meaning (Italy is more similar to Spain than to Germany). In contrast, it is difficult to extract the values of more specific attributes of word referents from distributional representations, attributes of the kind typically found in structured knowledge bases (Italy has 60 million inhabitants). In this paper, we pursue the hypothesis that distributional vectors also implicitly encode referential attributes. We show that a standard supervised regression model is in fact sufficient to retrieve such attributes to a reasonable degree of accuracy: When evaluated on the prediction of both categorical and numeric attributes of countries and cities as stored in a structured knowledge base, the model consistently reduces baseline error by 30%, and is not far from the upper bound. Further analysis provides qualitative insight into the task, such as which types of attributes are harder to learn from distributional information.

Modeling Reportable Events as Turning Points in Narrative Jessica Ouyang and Kathy McKeown

We present novel experiments in modeling the rise and fall of story characteristics within narrative, leading up to the Most Reportable Event (MRE), the compelling event that is the nucleus of the story. We construct a corpus of personal narratives from the bulletin board website Reddit, using the organization of Reddit content into topic-specific communities to automatically identify narratives. Leveraging the structure of Reddit comment threads, we automatically label a large dataset of narratives. We present a change-based model of narrative that tracks changes in formality, affect, and other characteristics over the course of a story, and we use this model in distant supervision and self-training experiments that achieve significant improvements over the baselines at the task of identifying MREs.

A Computational Cognitive Model of Novel Word Generalization Aida Nematzadeh, Erin Grant and Suzanne Stevenson

A key challenge in vocabulary acquisition is learning which of the many possible meanings is appropriate for a word. The word generalization problem refers to how children associate a word such as dog with a meaning at the appropriate category level in the taxonomy of objects, such as Dalmatians, dogs, or animals. We present the first computational study of word generalization integrated within a word learning model. The model simulates child and adult patterns of word generalization in a word-learning task. These patterns arise due to the interaction of type and token frequencies in the input data, an influence often observed in people's generalization of linguistic categories.

Learning Semantic Composition to Detect Non-compositionality of Multiword Expressions Majid Yazdani, Meghdad Farahmand and James Henderson

Non-compositionality of multiword expressions is an intriguing problem that can be the source of error in a variety of NLP tasks such as language generation, machine translation and word sense disambiguation. In this work we present a method of detecting non-compositional English noun compounds by learning a composition function. We explore a range of possible models for semantic composition, empirically evaluate these models and propose an improvement method over the most accurate ones. We show that a complex function such as polynomial projection can learn semantic composition and identify non-compositionality in an unsupervised way, beating all other baselines ranging from simple to complex. We show further improvements by also training a decomposition function, and with a form of EM algorithm over latent compositionality annotations.

Do You See What I Mean? Visual Resolution of Linguistic Ambiguities Yevgeni Berzak, Andrei Barbu, Daniel Harari, Boris Katz and Shimon Ullman

Understanding language goes hand in hand with the ability to integrate complex contextual information obtained via perception. In this work, we present a novel task for grounded language understanding: disambiguating a sentence given a visual scene which depicts one of the possible interpretations of that sentence. To this end, we introduce a new multimodal corpus containing ambiguous sentences, representing a wide range of syntactic, semantic and discourse ambiguities, coupled with videos that visualize the different interpretations for each sentence. We address this task by extending a vision model which determines if a sentence is depicted by a video. We demonstrate how such a model can be adjusted to recognize different interpretations of the same underlying sentence, allowing it to disambiguate sentences in a unified fashion across the different ambiguity types. Potential applications of this task include video retrieval, where capturing different meanings of a sentential query can be vital for obtaining good results.

Reordering Context-Free Grammar Induction Miloš Stanojević and Khalil Sima'an

We present a novel approach for unsupervised induction of a Reordering Grammar using a modified form of permutation trees (Zhang and Gildea, 2007), which we apply to preordering in phrase-based machine translation. Unlike previous approaches, we induce in one step both the hierarchical structure and the transduction function over it from word-aligned parallel corpora. Furthermore, our model (1) handles non-ITG reordering patterns (up to 5-ary branching), (2) is learned from all derivations by treating not only labeling but also bracketing as a latent variable, (3) is entirely unlexicalized at the level of reordering rules, and (4) requires no linguistic annotation. Our model is evaluated both for accuracy in predicting target order and for its impact on translation quality. We report significant performance gains over phrase reordering, and over two known preordering baselines for English-Japanese.

Building a shared world: mapping distributional to model-theoretic semantic spaces Aurélie Herbelot and Eva Maria Vecchi

In this paper, we introduce an approach to automatically map a standard distributional semantic space onto a set-theoretic model. We predict that there is a functional relationship between distributional information and vectorial concept representations in which dimensions are predicates and weights are generalised quantifiers. In order to test our prediction, we learn a model of such relationship over a publicly available dataset of feature norms annotated with natural language quantifiers. Our initial experimental results show that, at least for domain-specific data, we can indeed map between formalisms, and generate high-quality vector representations which correspond to generalised quantifiers in a set-theoretic model. We further investigate the generation of natural language quantifiers from such vectors.
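The hypothesized functional relationship between the two spaces can be sketched as a least-squares linear map between paired vectors, fit on concepts attested in both formalisms. This is a simplification of the learned model; the function name is illustrative.

```python
import numpy as np

def learn_map(D, M):
    """Least-squares linear map W with D @ W ~= M, where rows of D are
    distributional vectors and rows of M are the paired model-theoretic
    (quantifier-weight) vectors for the same concepts."""
    W, *_ = np.linalg.lstsq(D, M, rcond=None)
    return W
```

Once fit, `D_new @ W` predicts a quantifier-weight vector for an unseen concept from its distributional vector alone.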

Conversation Trees: A Grammar Model for Topic Structure in Forums Annie Louis and Shay B. Cohen

Online forum discussions proceed differently from face-to-face conversations and any single thread on a forum contains posts on different subtopics. This work aims to characterize the content of a forum thread as a 'conversation tree' of topics. We present models that jointly perform two tasks: segment a thread into sub-parts, and assign a topic to each part. The core idea of our work is a definition of topic structure using probabilistic grammars. By leveraging the flexibility of two grammar formalisms, Context-Free Grammars and Linear Context-Free Rewriting Systems, our models create desirable structures for forum threads: our topic segmentation is hierarchical, links non-adjacent segments on the same topic, and jointly labels the topic during segmentation. We show that our models outperform three tree generation baselines.

RELLY: Inferring Hypernym Relationships Between Relational Phrases Adam Grycner, Gerhard Weikum, Jay Pujara, James Foulds and Lise Getoor

Relational phrases (e.g., "got married to") and their hypernyms (e.g., "is a relative of") are central for many tasks including question answering, open information extraction, paraphrasing, and entailment detection. This has motivated the development of linguistic resources such as DIRT (Lin and Pantel, 2001), PATTY (Nakashole et al., 2012), and WiseNet (Moro and Navigli, 2012), which systematically collect and organize relational phrases. These resources have demonstrable practical benefits, but are each limited due to noise, sparsity, or size. We present a new general-purpose method, RELLY, for constructing a large hypernymy graph of relational phrases with high-quality subsumptions. Our graph induction approach integrates small high-precision knowledge bases together with larger automatically curated resources, and reasons collectively to combine these resources into a consistent graph, using a recently developed probabilistic programming language called probabilistic soft logic (PSL) (Bach et al., 2015). We use RELLY to construct a hypernymy graph consisting of 20K relational phrases with 35K hypernymy links. We extensively evaluate our hypernymy graph both intrinsically and extrinsically. Our evaluation indicates a hypernymy link precision of 78%, and demonstrates the value of this resource for a document-relevance ranking task.

Extracting Relations between Non-Standard Entities using Distant Supervision and Imitation Learning Isabelle Augenstein, Andreas Vlachos and Diana Maynard

Distantly supervised approaches have become popular in recent years as they allow training relation extractors without text-bound annotation, using instead known relations from a knowledge base and a large textual corpus from an appropriate domain. While state-of-the-art distant supervision approaches use off-the-shelf named entity recognition (NER) systems to identify relation arguments, discrepancies in domain or genre between the data used for NER training and the intended domain for the relation extractor can lead to low performance. This is particularly problematic for "non-standard" named entities such as album titles, which would fall into the MISC category. We propose to ameliorate this issue by jointly training the named entity classifier and the relation extractor using imitation learning, which reduces structured prediction learning to classification learning. We further experiment with different features and compare against a baseline using an off-the-shelf supervised NER system. Experiments show that our approach improves on the baseline for both "standard" and "non-standard" named entities by 19 points in average precision. Furthermore, we show that Web features such as links and lists increase average precision by 7 points.

Learning natural language inference from a large annotated corpus Samuel R. Bowman, Gabor Angeli, Christopher Potts and Christopher D. Manning

Understanding entailment and contradiction is fundamental to understanding natural language, and inference about entailment and contradiction is a valuable testing ground for the development of semantic representations. However, machine learning research in this area has been dramatically limited by the lack of large-scale resources. To address this, we introduce a new freely available corpus of labeled sentence pairs, written by humans doing a novel grounded task based on image captioning. At 570K pairs, it is two orders of magnitude larger than all other resources of its type. We find that this increase in scale allows lexicalized classifiers to outperform some sophisticated existing entailment models, and that it allows a neural network-based model to perform competitively on natural language inference benchmarks for the first time.

Using Personal Traits For Brand Preference Prediction Chao Yang, Shimei Pan, Jalal U. Mahmud, Huahai Yang and Padmini Srinivasan

In this paper, we present the first comprehensive study of the relationship between a person's traits and his/her brand preferences. In our analysis, we included a large number of character traits such as personality, personal values and individual needs. These trait features were obtained from both a psychometric survey and automated social media analytics. We also included an extensive set of brand names from diverse product categories. From this analysis, we want to shed some light on (1) whether it is possible to use personal traits to infer an individual's brand preferences, and (2) whether the trait features automatically inferred from social media are good proxies for the ground truth character traits in brand preference prediction.

Auto-Sizing Neural Networks: With Applications to n-gram Language Models Kenton Murray and David Chiang

Neural networks have been shown to improve performance across a range of natural-language tasks while addressing some issues with traditional models such as size. However, designing and training them can be complicated. Frequently, researchers resort to repeated experimentation across a range of parameters to pick optimal settings. In this paper, we address the issue of choosing the correct number of units in the hidden layers. We introduce a method for automatically adjusting network size by pruning out hidden units through $\ell_{\infty,1}$ and $\ell_{2,1}$ regularization. We apply this method to language modeling and demonstrate its ability to correctly choose the number of hidden units while maintaining perplexity. We also include these models in a machine translation decoder and show that these smaller neural models maintain the significant improvements of their unpruned versions.
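The pruning idea can be sketched with the $\ell_{2,1}$ case: under proximal gradient training, the regularizer's proximal operator shrinks each hidden unit's outgoing weight row toward zero, and rows that reach exactly zero mark units that can be removed. A minimal illustration follows; the matrix and step size are toy values, not taken from the paper:

```python
import numpy as np

def prox_l21(W, step):
    """Proximal step for the l2,1 regularizer: shrink each row's
    l2 norm toward zero. Rows driven exactly to zero correspond to
    hidden units that can be pruned from the network."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - step / np.maximum(norms, 1e-12))
    return W * scale

# Toy example: 4 hidden units; the last two carry tiny weights.
W = np.array([[2.0, 2.0], [1.5, -1.5], [0.05, 0.02], [0.01, -0.03]])
W_new = prox_l21(W, step=0.1)
alive = np.linalg.norm(W_new, axis=1) > 0  # surviving hidden units
```

Because the penalty acts on whole rows rather than individual weights, entire units (not scattered connections) are zeroed out, which is what makes the network's size self-adjusting.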

Improved Relation Extraction with Feature-Rich Compositional Embedding Models Matthew R. Gormley, Mo Yu and Mark Dredze

Compositional embedding models build a representation (or embedding) for a linguistic structure based on its component word embeddings. We propose a Feature-rich Compositional Embedding Model (FCM) for relation extraction that is expressive, generalizes to new domains, and is easy to implement. The key idea is to combine (unlexicalized) hand-crafted features with learned word embeddings. The model is able to directly tackle the difficulties met by traditional compositional embedding models, such as handling arbitrary types of sentence annotations and utilizing global information for composition. We test the proposed model on two relation extraction tasks, and demonstrate that our model outperforms both previous compositional models and traditional feature-rich models on the ACE 2005 relation extraction task and the SemEval 2010 relation classification task. The combination of our model and a log-linear classifier with hand-crafted features gives state-of-the-art results.

Joint prediction in MST-style discourse parsing for argumentation mining Andreas Peldszus and Manfred Stede

We introduce a new approach to argumentation mining that we applied to a parallel German/English corpus of short texts annotated with argumentation structure. We focus on structure prediction, which we break into a number of subtasks: relation identification, central claim identification, role classification, and function classification. Our new model jointly predicts different aspects of the structure by combining the different subtask predictions in the edge weights of an evidence graph; we then apply a standard MST decoding algorithm. This model not only outperforms two reasonable baselines and two data-driven models of global argument structure for the difficult subtask of relation identification, but also improves the results for central claim identification and function classification and it compares favorably to a complex mstparser pipeline.

Molding CNNs for text: non-linear, non-consecutive convolutions Tao Lei, Regina Barzilay and Tommi Jaakkola

The success of deep learning often derives from well-chosen operational building blocks. In this work, we revise the temporal convolution operation in CNNs to better adapt it to text processing. Instead of concatenating word representations, we appeal to tensor algebra and use low-rank n-gram tensors to directly exploit interactions between words already at the convolution stage. Moreover, we extend the n-gram convolution to non-consecutive words to recognize patterns with intervening words. Through a combination of low-rank tensors and pattern weighting, we can efficiently evaluate the resulting convolution operation via dynamic programming. We test the resulting architecture on standard sentiment classification and news categorization tasks. Our model achieves state-of-the-art performance both in terms of accuracy and training speed among a variety of (neural network) models.

A Dynamic Programming Algorithm for Computing N-gram Posteriors from Lattices Dogan Can and Shrikanth Narayanan

Efficient computation of n-gram posterior probabilities from lattices has applications in lattice-based minimum Bayes-risk decoding in statistical machine translation and the estimation of expected document frequencies from spoken corpora. In this paper, we present an algorithm for computing the posterior probabilities of all n-grams in a lattice and constructing a minimal deterministic weighted finite-state automaton associating each n-gram with its posterior for efficient storage and retrieval. Our algorithm builds upon the best known algorithm in the literature for computing n-gram posteriors from lattices and leverages the following observations to significantly improve the time and space requirements: i) the n-grams for which the posteriors will be computed typically comprise all n-grams in the lattice up to a certain length; ii) the posterior is equivalent to the expected count for an n-gram that does not repeat on any path; iii) there are efficient algorithms for computing n-gram expected counts from lattices. We present experimental results comparing our algorithm with the best known algorithm in the literature as well as a baseline algorithm based on weighted finite-state automata operations.

Exploring Markov Logic Networks for Question Answering Tushar Khot, Niranjan Balasubramanian, Eric Gribkoff, Ashish Sabharwal, Peter Clark and Oren Etzioni

Our goal is to answer elementary-level science questions using knowledge extracted automatically from textbooks, expressed in a subset of first-order logic. Such knowledge is incomplete and noisy. Markov Logic Networks (MLNs) seem a natural model for expressing such knowledge, but the exact way of leveraging MLNs is by no means obvious. We investigate three ways of applying MLNs to our task. First, we simply use the extracted science rules directly as MLN clauses and exploit the structure present in hard constraints to improve tractability. Second, we interpret science rules as describing prototypical entities, resulting in a drastically simplified but brittle network. Our third approach, called Praline, uses MLNs to align lexical elements as well as define and control how inference should be performed in this task. Praline demonstrates a 15% accuracy boost and a 10x reduction in runtime as compared to other MLN-based methods, and comparable accuracy to word-based baseline approaches.

Estimation of Discourse Segmentation Labels from Crowd Data Ziheng Huang, Jialu Zhong and Rebecca J. Passonneau

For annotation tasks involving independent items, probabilistic models have been used to infer ground truth labels from crowdsourcing, where many annotators independently label the same data. Such models have been shown to produce results superior to taking the majority vote as the ground truth. This paper presents a new dataset and new methods for sequential data where the labels are not independent. The data consists of crowd labels for annotation of discourse segment boundaries assigned to fifty recorded telephone conversations. To estimate ground truth labels, two approaches are presented that extend Hidden Markov Models to relax the independence assumption on observed data, based on the observation that segments tend to be several utterances long. Results of the models are checked using metrics that test whether the same annotators maintain the same relative performance across different conversations.

Semantic Framework for Comparison Structures in Natural Language Omid Bakhshandeh and James Allen

Comparison is one of the most important phenomena in language for expressing objective and subjective facts about various entities. Systems that can understand and reason over comparatives can play a major role in applications that require deeper understanding of language. In this paper we present a novel semantic framework for representing the meaning of comparison structures in natural language, which models comparisons as predicate-argument pairs inter-connected with semantic roles. Our framework supports not only adjectives, but also adverbial, nominal, and verbal comparatives. With this paper, we release a novel dataset of gold-standard comparison structures annotated according to our semantic framework.

Towards the Extraction of Customer to Customer Suggestions in Reviews Sapna Negi and Paul Buitelaar

In this work, we target the automatic detection of suggestion-expressing sentences in customer reviews. Such sentences mainly comprise advice, recommendations and tips for fellow customers, and sometimes suggestions for improvements to the manufacturers and providers as well. The scope of this work is limited to the former. Since this is a young problem, prior to the development of a solution, there is a need for a well-formed problem definition and benchmark datasets. This work provides a three-fold contribution: a problem definition, a benchmark dataset, and an approach for the detection of suggestions to customers. We identify two forms of suggestion expressions in reviews: implicit and explicit. We limit the scope of this work to the explicit ones. The problem is framed as a sentence classification problem, and a set of linguistically motivated features is proposed in order to classify sentences as suggestion and non-suggestion sentences. Some interesting observations and analysis are also reported.

Neural Networks for Open Domain Targeted Sentiment Meishan Zhang, Yue Zhang and Duy Tin Vo

Open domain targeted sentiment is the joint information extraction task that finds target mentions together with the sentiment towards each mention from a text corpus. The task is typically modeled as a sequence labeling problem, and solved using state-of-the-art labelers such as CRF. We empirically study the effect of word embeddings and automatic feature combinations on the task by extending a CRF baseline using neural networks, which have demonstrated great potential for sentiment analysis. Results show that the neural model can give better results by significantly increasing the recall. In addition, we propose a novel integration of neural and discrete features, which combines their relative advantages, leading to significantly higher results compared to both baselines.

Sarcastic or Not: Word-Embeddings to Predict the Literal or Sarcastic Meaning of Words Debanjan Ghosh, Weiwei Guo and Smaranda Muresan

Sarcasm is generally characterized as a figure of speech that involves the substitution of a literal by a figurative meaning, which is usually the opposite of the original literal meaning. We re-frame the sarcasm detection task as a word-sense disambiguation problem, where the sense of a word is either the literal or the sarcastic sense. We call this the Literal/Sarcastic Sense Disambiguation (LSSD) task. We address two issues: 1) collection of a set of target words that can have either literal or sarcastic meanings depending on context; and 2) given an utterance and a target word, automatically detect whether the target word is used in the literal or the sarcastic sense. For the latter, we investigate several word-sense disambiguation methods and show that a Support Vector Machines (SVM) classifier with a modified kernel using word embeddings achieves a 7-10% F1 improvement over a strong lexical baseline.

An Alignment-Based Model for Compositional Semantics and Sequential Reasoning Jacob Andreas and Dan Klein

This paper describes an alignment-based model for interpreting natural language instructions in context. We approach instruction following as a sequence prediction problem, scoring sequences of actions conditioned on structured observations of text and the environment. Our model explicitly represents both the low-level compositional structure of individual actions and observations, and the high-level search problem that gives rise to full plans. To demonstrate the model's flexibility, we apply it to a diverse set of benchmark tasks. On every task, we outperform strong task-specific baselines, including several new state-of-the-art results.

Joint Prediction for Entity/Event-Level Sentiment Analysis using Probabilistic Soft Logic Models Lingjia Deng and Janyce Wiebe

In this work, we build an entity/event-level sentiment analysis system, which is able to recognize and infer both explicit and implicit sentiments among entities and events in the text. We design Probabilistic Soft Logic models to integrate explicit sentiments, inference rules, and +/-effect event information (events that positively or negatively affect entities) together. The experiments show that the method is able to greatly improve over baseline accuracies in recognizing entity/event-level sentiments.

Using Content-level Structures for Summarizing Microblog Repost Trees Jing Li, Wei Gao, Zhongyu Wei, Baolin Peng and Kam-Fai Wong

A microblog repost tree provides strong clues on how an event described therein develops. To help social media users capture the main clues of an event on microblogging sites, we propose a novel repost tree summarization framework by effectively differentiating two kinds of messages on repost trees called leaders and followers, which are derived from content-level structure information, i.e., microblog contents and the reposting relations. To this end, a Conditional Random Fields (CRF) model is used to detect leaders across repost tree paths. We then present a variant of a random-walk-based summarization model to rank and select salient messages based on the result of leader detection. To reduce the error propagation cascaded from leader detection, we improve the framework by enhancing the random walk with adjustment steps for sampling from leader probabilities given all the reposting messages. For evaluation, we construct two annotated corpora, one for leader detection, and the other for repost tree summarization. Experimental results confirm the effectiveness of our method.

Traversing Knowledge Graphs in Vector Space Kelvin Guu, John Miller and Percy Liang

Path queries on a knowledge graph can be used to answer compositional questions such as "What languages are spoken by people living in Lisbon?". However, knowledge graphs often have missing facts (edges) which disrupts path queries. Recent models for knowledge base completion impute missing facts by embedding knowledge graphs in vector spaces. We show that these models can be recursively applied to answer path queries, but that they suffer from cascading errors. This motivates a new "compositional" training objective, which dramatically improves all models' ability to answer path queries, in some cases more than doubling accuracy. On a standard knowledge base completion task, we also demonstrate that compositional training acts as a novel form of structural regularization, reliably improving performance across all base models (reducing errors by up to 43%) and achieving new state-of-the-art results.
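Recursive application of an embedding model to a path query can be sketched with TransE-style additive composition: traverse the path by adding relation vectors to the source entity's vector, then rank candidate answers by distance. The entities, relations, and embeddings below are toy illustrations constructed by hand, not the paper's trained models:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 16
# Hypothetical toy graph: maria -lives_in-> lisbon -language_spoken-> portuguese.
relations = {r: rng.normal(size=dim) for r in ["lives_in", "language_spoken"]}
entities = {"maria": rng.normal(size=dim)}
entities["lisbon"] = entities["maria"] + relations["lives_in"]
entities["portuguese"] = entities["lisbon"] + relations["language_spoken"]

def path_score(source, path, target):
    """Compose the path by adding relation vectors, then score the
    candidate target by negative Euclidean distance."""
    v = entities[source].copy()
    for r in path:
        v = v + relations[r]
    return -np.linalg.norm(v - entities[target])

# Answer the 2-hop query "language spoken where maria lives".
query = ["lives_in", "language_spoken"]
best = max(entities, key=lambda e: path_score("maria", query, e))
```

In practice the intermediate vectors carry accumulated model error at each hop, which is the cascading-error problem the compositional training objective is designed to counteract.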

Improving Semantic Parsing with Enriched Synchronous Context-Free Grammar Junhui Li, Muhua Zhu, Wei Lu and Guodong Zhou

Semantic parsing maps a sentence in natural language into a structured meaning representation. Previous studies show that semantic parsing with synchronous context-free grammars (SCFGs) achieves favorable performance over most other alternatives. Motivated by the observation that the performance of semantic parsing with SCFGs is closely tied to the translation rules, this paper explores extending translation rules with high quality and increased coverage in three ways. First, we introduce structure-informed non-terminals, better guiding the parsing in favor of well-formed structures, instead of using a uniform non-terminal in SCFGs. Second, we examine the difference between word alignments for semantic parsing and statistical machine translation (SMT) to better adapt word alignment in SMT to semantic parsing. And finally, we address the unknown word translation issue via synthetic translation rules. Evaluation on the standard GeoQuery benchmark dataset shows that our approach outperforms the state-of-the-art across various languages, including English, German and Greek.

Learning to Recognize Affective Polarity in Similes Ashequl Qadir, Ellen Riloff and Marilyn Walker

A simile is a comparison between two essentially unlike things, such as "Jane swims like a dolphin". Similes often express a positive or negative sentiment toward something, but recognizing the polarity of a simile can depend heavily on world knowledge. For example, "memory like an elephant" is positive, but "memory like a sieve" is negative. Our research explores methods to recognize the polarity of similes on Twitter. We train classifiers using lexical, semantic, and sentiment features, and experiment with both manually and automatically generated training data. Our approach yields good performance at identifying positive and negative similes, and substantially outperforms existing sentiment resources.

Incorporating Trustiness and Collective Synonym/Contrastive Evidence into Taxonomy Construction Tuan Luu Anh, Jung-jae Kim and See Kiong Ng

Taxonomy plays an important role in many applications by organizing domain knowledge into a hierarchy of is-a relations between terms. Previous work on taxonomic relation identification from text corpora is lacking in two aspects: 1) It does not consider the trustiness of individual source texts, which is important for filtering out incorrect relations from unreliable sources. 2) It also does not consider collective evidence from synonyms and contrastive terms, where synonyms may provide additional support for taxonomic relations, while contrastive terms may contradict them. In this paper, we present a method of taxonomic relation identification that incorporates the trustiness of source texts, measured with such techniques as PageRank and knowledge-based trust, and the collective evidence of synonyms and contrastive terms, identified by linguistic pattern matching and machine learning. The experimental results show that the proposed features can consistently improve performance by 4%-10% in F-measure.

Broad-coverage CCG Semantic Parsing with AMR Yoav Artzi, Kenton Lee and Luke Zettlemoyer

We propose a grammar induction technique for AMR semantic parsing. While previous grammar induction techniques were designed to re-learn a new parser for each target application, the recently annotated AMR bank provides a unique opportunity to induce a single model for understanding broad-coverage newswire text and support a wide range of applications. We present a new model that combines CCG parsing to recover compositional aspects of meaning and a factor graph to model non-compositional phenomena, such as anaphoric dependencies. Our approach achieves 66.2 Smatch F1 score on the AMR bank, significantly outperforming the previous state of the art.

Effective Approaches to Attention-based Neural Machine Translation Thang Luong, Hieu Pham and Christopher D. Manning

The attentional mechanism has been used in neural machine translation (NMT) to selectively focus on parts of the source sentence during translation. However, there has been little work exploring useful architectures for attention-based NMT. This work examines two simple and effective classes of the attentional mechanism: the global approach which always attends to all source words and the local one that only looks at a subset of source words at a time. We demonstrate the effectiveness of both approaches over the WMT'14 translation tasks between English and German in both directions. Our attentional NMTs provide a boost of up to 2.8 BLEU over non-attentional systems. Furthermore, by feeding the attentional vector as an additional input to the next time step, we achieve a further gain of up to 1.9 BLEU.
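The global variant can be sketched in a few lines: score every source hidden state against the current decoder state, normalize with a softmax, and take the weighted average of source states as the context vector. This is a minimal dot-product sketch with random toy states, not the paper's full model, which also studies other scoring functions and the local variant:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def global_attention(h_t, source_states):
    """Global dot-product attention: score all source states against
    the decoder state, normalize, and average the source states
    weighted by the resulting alignment."""
    scores = source_states @ h_t      # one score per source word
    align = softmax(scores)           # attention weights (sum to 1)
    context = align @ source_states   # weighted average of source states
    return align, context

rng = np.random.default_rng(1)
src = rng.normal(size=(5, 8))  # 5 source words, hidden size 8
h_t = rng.normal(size=8)       # current decoder hidden state
align, context = global_attention(h_t, src)
```

The local approach differs only in restricting `source_states` to a window around a predicted alignment position before scoring.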

Representing Text for Joint Embedding of Text and Knowledge Bases Kristina Toutanova, Danqi Chen, Patrick Pantel, Hoifung Poon, Pallavi Choudhury and Michael Gamon

Models that learn to represent textual and knowledge base relations in the same continuous latent space are able to perform joint inferences among the two kinds of relations and obtain high accuracy on knowledge base completion (Riedel et al. 2013). In this paper we propose a model that captures the compositional structure of textual relations, and jointly optimizes entity, knowledge base, and text relation representations. The proposed model significantly improves performance over a model that does not share parameters among textual relations with common sub-structure.

Dual Decomposition Inference for Graphical Models over Strings Nanyun Peng, Ryan Cotterell and Jason Eisner

We investigate dual decomposition for joint MAP inference of many strings. Given an arbitrary graphical model, we decompose it into small acyclic sub-models, whose MAP configurations can be found by finite-state composition and dynamic programming. We force the solutions of these subproblems to agree on overlapping variables, by tuning Lagrange multipliers for an adaptively expanding set of variable-length n-gram count features. This is the first inference method for arbitrary graphical models over strings that does not require approximations such as random sampling, message simplification, or a bound on string length. Provided that the inference method terminates, it gives a certificate of global optimality (though MAP inference in our setting is undecidable in general). On our global phonological inference problems, it does indeed terminate, and achieves more accurate results than max-product and sum-product loopy belief propagation.
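The agreement mechanism behind dual decomposition can be illustrated on a toy discrete problem rather than strings: each subproblem repeatedly solves its own MAP, and Lagrange multipliers are nudged by subgradient steps until both solutions coincide, at which point agreement certifies a globally optimal joint solution. All scores below are made up for illustration:

```python
import numpy as np

# Two subproblems score the same variable (a value in {0, 1, 2})
# and must agree on it. Multipliers lam shift their scores until
# both MAPs coincide.
f = np.array([1.0, 2.0, 0.0])  # subproblem 1 scores
g = np.array([2.0, 0.5, 0.0])  # subproblem 2 scores

lam = np.zeros(3)
for t in range(100):
    x = int(np.argmax(f + lam))  # MAP of subproblem 1
    y = int(np.argmax(g - lam))  # MAP of subproblem 2
    if x == y:                   # agreement => optimality certificate
        break
    step = 0.5 / (t + 1)
    lam[x] -= step               # subgradient update on the
    lam[y] += step               # agreement constraint
```

In the paper's setting, the subproblems are acyclic graphical sub-models solved by finite-state composition, and the agreed-upon quantities are variable-length n-gram counts rather than a single discrete value.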

Comparing Word Representations for Implicit Discourse Relation Classification Chloé Braud and Pascal Denis

This paper presents a detailed comparative framework for assessing the usefulness of unsupervised word representations for identifying so-called implicit discourse relations. Specifically, we compare standard one-hot word pair representations against low-dimensional representations based on Brown clusters and word embeddings. We also consider various word vector combination schemes for deriving discourse segment representations from word vectors, and compare representations based either on all words or limited to head words. Our main finding is that denser representations systematically outperform sparser ones and give state-of-the-art performance or above without the need for additional hand-crafted features, thus alleviating the need for traditional external resources.

Chinese Word Segmentation Leveraging Bilingual Unlabeled Data Wei Chen and Bo Xu

This paper presents a bilingual semi-supervised Chinese word segmentation (CWS) method that leverages the natural segmenting information of English sentences. The proposed method involves learning three levels of features, namely, character-level, phrase-level and sentence-level, provided by multiple sub-models. We use a sub-model of conditional random fields (CRF) to learn monolingual grammars, a sub-model based on character-based alignment to obtain explicit segmenting knowledge, and another sub-model based on transliteration similarity to detect out-of-vocabulary (OOV) words. Moreover, we propose a sub-model leveraging neural network to ensure the proper treatment of the semantic gap and a phrase-based translation sub-model to score the translation probability of the Chinese segmentation and its corresponding English sentences. A cascaded log-linear model is employed to combine these features to segment bilingual unlabeled data, the results of which are used to justify the original supervised CWS model. The evaluation shows that our method yields superior results compared with those of the state-of-the-art monolingual and bilingual semi-supervised models that have been reported in the literature.

Posterior calibration and exploratory analysis for natural language processing models Khanh Nguyen and Brendan O'Connor

Many models in natural language processing define probabilistic distributions over linguistic structures. We argue that (1) the quality of a model's posterior distribution can and should be directly evaluated, as to whether probabilities correspond to empirical frequencies; and (2) NLP uncertainty can be projected not only to pipeline components, but also to exploratory data analysis, telling a user when to, and when not to, trust the NLP analysis. We present methods of analyzing calibration, and compare several commonly used models. We also contribute a coreference sampling algorithm that can create confidence intervals for a political event extraction task.

Part-of-speech Taggers for Low-resource Languages using CCA Features Young-Bum Kim, Benjamin Snyder and Ruhi Sarikaya

In this paper, we address the challenge of creating accurate and robust part-of-speech taggers for low-resource languages. We propose a method that leverages existing parallel data between the target language and a large set of resource-rich languages without ancillary resources such as tag dictionaries. Crucially, we use CCA to induce latent word representations that incorporate cross-genre distributional cues, as well as projected tags from a full array of resource-rich languages. We develop a probability-based confidence model to identify words with highly likely tag projections and use these words to train a multi-class SVM using the CCA features. Our method yields average performance of 85% accuracy for languages with almost no resources, outperforming a state-of-the-art partially-observed CRF model.

Semantic Annotation for Microblog Topics Using Wikipedia Temporal Information Tuan Tran, Nam Khanh Tran, Asmelash Teka Hadgu and Robert Jäschke

In this paper we study the problem of semantic annotation for a trending hashtag, which is a crucial step towards analyzing user behavior in social media, yet has been largely unexplored. We tackle the problem via linking to entities from Wikipedia. We incorporate the social aspects of trending hashtags by identifying prominent entities for the annotation so as to maximize the information spreading in entity networks. We exploit temporal dynamics of entities in Wikipedia, namely Wikipedia edits and page views, to improve the annotation quality. Our experiments show that we significantly outperform the established methods in tweet annotation.

Extracting Condition-Opinion Relations Toward Fine-grained Opinion Mining Yuki Nakayama and Atsushi Fujii

A fundamental issue in opinion mining is to search a corpus for opinion units, each of which typically comprises an evaluation by an author of a target object from an aspect, such as "This hotel is in a good location". However, few attempts have been made to address cases where the validity of an evaluation is restricted by a condition in the source text, such as "for traveling with small kids". In this paper, we propose a method to extract condition-opinion relations from online reviews, which enables fine-grained analysis of the utility of target objects depending on the user's attributes, purpose, and situation. Our method uses supervised machine learning to identify sequences of words or phrases that constitute conditions for opinions. We propose several features associated with lexical and syntactic information, and show their effectiveness experimentally.

Joint Entity Recognition and Disambiguation Gang Luo

Extracting named entities in text and linking extracted names to a given knowledge base are fundamental tasks in applications of text understanding. Existing systems typically run a Named Entity Recognition (NER) model to extract entity names first, then run an Entity Linking model to link extracted names to a knowledge base. NER and Linking models are usually trained separately, and the mutual dependency between the two tasks is ignored. We propose JERL, Joint Entity Recognition and Linking, to jointly model the NER and Linking tasks and capture the mutual dependency between them. It allows the information from each task to improve the performance of the other. To the best of our knowledge, JERL is the first model to jointly optimize the NER and Linking tasks together. In experiments on the CoNLL'03/AIDA data set, JERL outperforms state-of-the-art NER and Linking systems on both tasks, with improvements of 0.4% absolute F1 for NER on CoNLL'03 and 0.36% absolute precision@1.0 for Linking on AIDA. Since NER is a widely studied problem, we believe our improvement is significant.

Sieve-Based Spatial Relation Extraction with Expanding Parse Trees Jennifer D'Souza and Vincent Ng

Spatial relation extraction is the under-investigated task of identifying relations among spatial elements. A key challenge introduced by the recent SpaceEval shared task on spatial relation extraction is the identification of MOVELINKs, a type of spatial relation in which up to eight spatial elements can participate. To handle the complexity of extracting MOVELINKs, we combine two ideas that have been successfully applied to information extraction tasks, namely tree kernels and multi-pass sieves, proposing the use of an expanding parse tree as a novel structured feature for training MOVELINK classifiers. Our approach yields state-of-the-art results on two key subtasks in SpaceEval.

A Generative Word Embedding Model and its Low Rank Positive Semidefinite Solution Shaohua Li, Jun Zhu and Chunyan Miao

Most existing word embedding methods can be categorized into Neural Embedding Models and Matrix Factorization (MF)-based methods. However, some models are opaque to probabilistic interpretation, and MF-based methods, typically solved using Singular Value Decomposition (SVD), may incur loss of corpus information. In addition, it is desirable to incorporate global latent factors, such as topics, sentiments or writing styles, into the word embedding model. Since generative models provide a principled way to incorporate latent factors, we propose a generative word embedding model, which is easy to interpret, and can serve as a basis for more sophisticated latent factor models. The model inference reduces to a low-rank weighted positive semidefinite approximation problem. Its optimization is approached by eigendecomposition on a submatrix, followed by online blockwise regression, which is scalable and avoids the information loss in SVD. In experiments on 7 common benchmark datasets, our vectors are competitive to word2vec, and better than other MF-based methods.
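The low-rank positive semidefinite approximation at the heart of the inference can be sketched in its unweighted form: eigendecompose the symmetric matrix, keep the largest eigenvalues, and clip any negatives to zero. The weighted variant the paper actually solves is more involved, and the matrix here is a toy stand-in for a real co-occurrence statistic:

```python
import numpy as np

def low_rank_psd(M, rank):
    """Nearest (in Frobenius norm) PSD matrix of at most the given
    rank to a symmetric M: keep the top eigenvalues of the
    eigendecomposition and clip negatives to zero."""
    vals, vecs = np.linalg.eigh(M)
    order = np.argsort(vals)[::-1][:rank]
    kept = np.clip(vals[order], 0.0, None)
    return (vecs[:, order] * kept) @ vecs[:, order].T

# Toy symmetric matrix standing in for a co-occurrence statistic.
M = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])
A = low_rank_psd(M, rank=2)
```

The eigenvector columns scaled by the square roots of the kept eigenvalues would then serve as the low-dimensional vectors associated with each row of the matrix.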

Density-Driven Cross-Lingual Transfer of Dependency Parsers Mohammad Sadegh Rasooli and Michael Collins

We present a novel method for the cross-lingual transfer of dependency parsers. Our goal is to induce a dependency parser in a target language of interest without any direct supervision: instead we assume access to parallel translations between the target and one or more source languages, and to supervised parsers in the source language(s). Our key contributions are to show the utility of dense projected structures when training the target language parser, and to introduce a novel learning algorithm that makes use of dense structures. Results on several languages show an absolute improvement of 5.51% in average dependency accuracy over the state-of-the-art method of (Ma and Xia, 2014). Our average dependency accuracy of 82.18% compares favourably to the accuracy of fully supervised methods.

Name List Only? Target Entity Disambiguation in Short Texts Yixin Cao, Juanzi Li, Xiaofei Guo, Shuanhu Bai, Heng Ji and Jie Tang

Target entity disambiguation (TED), the task of identifying target entities of the same domain, has been recognized as a critical step in various important applications. In this paper, we propose a graph-based model called TremenRank to collectively identify target entities in short texts given a name list only. TremenRank propagates trust within the graph, allowing for an arbitrary number of target entities and texts using inverted index technology. Furthermore, we design a multi-layer directed graph to assign different trust levels to short texts for better performance. The experimental results demonstrate that our model outperforms state-of-the-art methods with an average gain of 24.8% in accuracy and 15.2% in the F1-measure on three datasets in different domains.

C3EL: A Joint Model for Cross-Document Co-Reference Resolution and Entity Linking Sourav Dutta and Gerhard Weikum

Cross-document co-reference resolution (CCR) computes equivalence classes over textual mentions denoting the same entity in a document corpus. Named-entity linking (NEL) disambiguates mentions onto entities present in a knowledge base (KB) or maps them to null if not present in the KB. Traditionally, CCR and NEL have been addressed separately. However, such approaches miss the mutual synergies that arise when CCR and NEL are performed jointly. This paper proposes C3EL, an unsupervised framework combining CCR and NEL for jointly tackling both problems. C3EL incorporates results from the CCR stage into NEL, and vice versa: additional global context obtained from CCR improves the feature space and performance of NEL, while NEL in turn provides distant KB features for already disambiguated mentions to improve CCR. The CCR and NEL steps are interleaved in an iterative algorithm that focuses on the highest-confidence still unresolved mentions in each iteration. Experimental results on two different corpora, news-centric and web-centric, demonstrate significant gains over state-of-the-art baselines for both CCR and NEL.

A Single Word is not Enough: Ranking Multiword Expressions Using Distributional Semantics Martin Riedl and Chris Biemann

We present a new unsupervised mechanism, which ranks word n-grams according to their multiwordness. It heavily relies on a new uniqueness measure that computes, based on a distributional thesaurus, how often an n-gram could be replaced in context by a single-worded term. Combined with a penalty mechanism for incomplete terms, this forms a new measure called DRUID. Results show large improvements on two small test sets over competitive baselines. We demonstrate the scalability of the method to large corpora, and the measure's independence from shallow syntactic filtering.
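The intuition behind the uniqueness measure can be sketched with a toy distributional thesaurus (the neighbour lists below are hypothetical; the actual DRUID measure also weights by distributional similarity and adds the incompleteness penalty):

```python
def uniqueness(neighbors):
    """Fraction of an n-gram's distributionally similar terms that are
    single words: a true multiword expression like 'hot dog' tends to be
    substitutable by one word ('frankfurter'), whereas a compositional
    n-gram like 'the dog' is mostly similar to other multiword strings."""
    if not neighbors:
        return 0.0
    single = sum(1 for t in neighbors if " " not in t)
    return single / len(neighbors)

# Toy thesaurus: each n-gram mapped to its most similar terms in context
thesaurus = {
    "hot dog": ["frankfurter", "sausage", "burger", "hot sausage"],
    "the dog": ["the cat", "a dog", "the animal", "the puppy"],
}
scores = {ng: uniqueness(nbrs) for ng, nbrs in thesaurus.items()}
```

Ranking n-grams by this score pushes genuine multiword expressions to the top without any part-of-speech filtering, which is what the abstract's independence claim is about.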

Syntactic Dependencies and Distributed Word Representations for Analogy Detection and Mining Likun Qiu, Yue Zhang and Yanan Lu

Distributed word representations capture relational similarities by means of vector arithmetic, giving high accuracies on analogy detection. We empirically investigate the use of syntactic dependencies for improving analogy detection based on distributed word representations, showing that dependency-based embeddings do not perform better than n-gram-based embeddings, but that dependency structures can be used to improve analogy detection by filtering candidates. In addition, we show that distributed representations of dependency structure can be used for measuring relational similarities, thereby helping analogy mining.
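The vector-offset method for analogy detection that this work builds on is easy to sketch (the toy 2-d embeddings below are constructed so the offsets line up; the `candidates` argument mimics dependency-based candidate filtering):

```python
import numpy as np

def solve_analogy(emb, a, b, c, candidates=None):
    """Return the word w maximizing cos(v_b - v_a + v_c, v_w), the
    standard vector-offset method for 'a is to b as c is to ?'.
    `candidates` optionally restricts the search space."""
    target = emb[b] - emb[a] + emb[c]
    target = target / np.linalg.norm(target)
    pool = candidates if candidates is not None else emb.keys()
    best, best_sim = None, -np.inf
    for w in pool:
        if w in (a, b, c):                 # exclude the query words
            continue
        v = emb[w] / np.linalg.norm(emb[w])
        sim = float(v @ target)
        if sim > best_sim:
            best, best_sim = w, sim
    return best

# Toy embeddings with a consistent "gender" offset on the second axis
emb = {
    "king":  np.array([1.0, 1.0]),
    "queen": np.array([1.0, -1.0]),
    "man":   np.array([0.5, 1.0]),
    "woman": np.array([0.5, -1.0]),
}
answer = solve_analogy(emb, "man", "king", "woman")  # -> "queen"
```

Passing a dependency-derived shortlist as `candidates` prunes implausible answers before the cosine comparison, which is the filtering role the abstract assigns to syntactic structure.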

Leave-one-out Word Alignment without Garbage Collector Effects Xiaolin Wang, Masao Utiyama, Andrew Finch and Eiichiro Sumita

Expectation-maximization algorithms, such as those implemented in GIZA++, pervade the field of unsupervised word alignment. However, these algorithms suffer from over-fitting, leading to ``garbage collector effects,'' where rare words tend to be erroneously aligned to untranslated words. This paper proposes a leave-one-out expectat