Publications

Framing Question Answering as Building and Ranking Answer Justifications
Jansen, Sharp, Surdeanu, and Clark — Accepted to Computational Linguistics
Abstract

We propose a question answering (QA) approach that both identifies correct answers and produces compelling human-readable justifications for why those answers are correct. Our method first identifies the actual information need in a question using psycholinguistic concreteness norms, then uses this information need to construct answer justifications by aggregating multiple sentences from different knowledge bases using syntactic and lexical information. We then jointly rank answers and their justifications using a reranking perceptron that treats justification quality as a latent variable. We evaluate our method on 1,000 multiple-choice questions from elementary school science exams, and empirically demonstrate that it performs better than several strong baselines. Our best configuration answers 44% of the questions correctly, where the top justifications for 57% of these correct answers contain a compelling human-readable justification that explains the inference required to arrive at the correct answer. We include a detailed characterization of the justification quality for both our method and a strong information retrieval baseline, and show that information aggregation is key to addressing the information need in complex questions.
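A minimal sketch of the latent reranking perceptron idea described above (not the paper's implementation; the feature representation and data layout are hypothetical): each candidate answer carries several candidate justifications, the best-scoring justification is chosen as a latent variable during both prediction and the update, and the weight vector is nudged toward the gold answer's best justification.

    import numpy as np

    def best_justification(w, answer):
        # Latent step: pick the justification whose features score highest under w.
        scores = [w @ f for f in answer["justification_features"]]
        j = int(np.argmax(scores))
        return j, scores[j]

    def predict(w, question):
        # Score each candidate answer by its best justification; return the top answer index.
        scored = [(best_justification(w, a)[1], i) for i, a in enumerate(question["answers"])]
        return max(scored)[1]

    def perceptron_epoch(w, questions, lr=1.0):
        # One pass of a latent reranking perceptron update.
        for q in questions:
            pred = predict(w, q)
            gold = q["gold_index"]
            if pred != gold:
                jg, _ = best_justification(w, q["answers"][gold])
                jp, _ = best_justification(w, q["answers"][pred])
                # Promote the gold answer's best justification, demote the predicted one's.
                w += lr * (q["answers"][gold]["justification_features"][jg]
                           - q["answers"][pred]["justification_features"][jp])
        return w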

What’s in an Explanation? Characterizing Knowledge and Inference Requirements for Elementary Science Exams
Jansen, Balasubramanian, Surdeanu, and Clark — Accepted to COLING 2016
[data and tool]
Abstract

QA systems have been making steady advances in the challenging elementary science exam domain. In this work, we develop an explanation-based analysis of knowledge and inference requirements, which supports a fine-grained characterization of the challenges. In particular, we model the requirements based on appropriate sources of evidence to be used for the QA task. We create requirements by first identifying suitable sentences in a knowledge base that support the correct answer, and then using these to build explanations, filling in any necessary missing information. These explanations are used to create a fine-grained categorization of the requirements. Using these requirements, we compare a retrieval and an inference solver on 212 questions. The analysis validates the gains of the inference solver, demonstrating that it answers more questions requiring complex inference, while also providing insights into the relative strengths of the solvers and knowledge sources. We release the annotated questions and explanations as a resource with broad utility for science exam QA, including determining knowledge base construction targets, as well as supporting information aggregation in automated inference.

Creating Causal Embeddings for Question Answering with Minimal Supervision
Sharp, Surdeanu, Jansen, Clark, and Hammond — EMNLP 2016
Abstract

A common model for question answering (QA) is that a good answer is one that is closely related to the question, where relatedness is often determined using general-purpose lexical models such as word embeddings. We argue that a better approach is to look for answers that are related to the question in a relevant way, according to the information need of the question, which may be determined through task-specific embeddings. With causality as a use case, we implement this insight in three steps. First, we generate causal embeddings cost-effectively by bootstrapping cause-effect pairs extracted from free text using a small set of seed patterns. Second, we train dedicated embeddings over this data, by using task-specific contexts, i.e., the context of a cause is its effect. Finally, we extend a state-of-the-art reranking approach for QA to incorporate these causal embeddings. We evaluate the causal embedding models both directly with a causal implication task, and indirectly in a downstream causal QA task using data from Yahoo! Answers. We show that explicitly modeling causality improves performance in both tasks. In the QA task our best model achieves 37.3% P@1, significantly outperforming a strong baseline by 7.7% (relative).
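As a rough illustration of the "context of a cause is its effect" idea (a simplification of the paper's directional training; the toy pairs and the use of gensim 4.x with a symmetric skip-gram window are assumptions), each bootstrapped cause-effect pair can be concatenated into a single pseudo-sentence so that cause tokens are trained with effect tokens as their context:

    from gensim.models import Word2Vec

    # Hypothetical bootstrapped (cause, effect) pairs, already tokenized.
    pairs = [
        (["smoking"], ["cancer"]),
        (["rain"], ["flooding"]),
        (["deforestation"], ["erosion", "habitat", "loss"]),
    ]

    # Treat each pair as one training "sentence" so cause tokens see effect tokens as context.
    # Note: this symmetric approximation drops the cause->effect directionality the paper models.
    sentences = [cause + effect for cause, effect in pairs]

    model = Word2Vec(sentences, vector_size=100, window=10, min_count=1, sg=1, epochs=50)

    # Causal relatedness between a candidate cause and effect, via cosine similarity.
    print(model.wv.similarity("rain", "flooding"))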

Spinning Straw into Gold: Using Free Text to Train Monolingual Alignment Models for Non-factoid Question Answering
Sharp, Jansen, Surdeanu, and Clark — NAACL 2015
[code and data]
Abstract

Monolingual alignment models have been shown to boost the performance of question answering systems by “bridging the lexical chasm” between questions and answers. The main limitation of these approaches is that they require semi-structured training data in the form of question-answer pairs, which is difficult to obtain in specialized domains or low-resource languages. We propose two inexpensive methods for training alignment models solely using free text, by generating artificial question-answer pairs from discourse structures. Our approach is driven by two representations of discourse: a shallow sequential representation, and a deep one based on Rhetorical Structure Theory. We evaluate the proposed model on two corpora from different genres and domains: one from Yahoo! Answers and one from the biology domain, and two types of non-factoid questions: manner and reason. We show that these alignment models trained directly from discourse structures imposed on free text improve performance considerably over an information retrieval baseline and a neural network language model trained on the same data.
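A hedged sketch of the shallow, marker-based side of this idea (not the paper's pipeline; the marker inventory and the simple clause split are illustrative): sentences containing a causal or purpose marker are split into an artificial question/answer pair, which can then feed any alignment or co-occurrence model.

    import re

    # Illustrative discourse markers; the actual inventory and direction handling differ.
    MARKERS = ["because", "so that", "in order to", "therefore"]

    def artificial_qa_pairs(sentences):
        """Split each sentence at the first discourse marker into a (question-side, answer-side) pair."""
        pairs = []
        for s in sentences:
            for m in MARKERS:
                match = re.search(r"\b" + re.escape(m) + r"\b", s, flags=re.IGNORECASE)
                if match:
                    left = s[:match.start()].strip(" ,.")
                    right = s[match.end():].strip(" ,.")
                    if left and right:
                        # Treat the clause before the marker as the "question" text
                        # and the clause after it as the "answer" text.
                        pairs.append((left, right))
                    break
        return pairs

    print(artificial_qa_pairs(["Plants wilt because they lack water."]))
    # [('Plants wilt', 'they lack water')]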

Higher-order Lexical Semantic Models for Non-factoid Answer Reranking
Fried, Jansen, Hahn-Powell, Surdeanu, and Clark — TACL 2015
Abstract

Lexical semantic models provide robust performance for question answering, but, in general, can only capitalize on direct evidence seen during training. For example, monolingual alignment models acquire term alignment probabilities from semi-structured data such as question-answer pairs; neural network language models learn term embeddings from unstructured text. All this knowledge is then used to estimate the semantic similarity between question and answer candidates. We introduce a higher-order formalism that allows all these lexical semantic models to chain direct evidence to construct indirect associations between question and answer texts, by casting the task as the traversal of graphs that encode direct term associations. Using a corpus of 10,000 questions from Yahoo! Answers, we experimentally demonstrate that higher-order methods are broadly applicable to alignment and language models, across both word and syntactic representations. We show that an important criterion for success is controlling for the semantic drift that accumulates during graph traversal. All in all, the proposed higher-order approach improves five out of the six lexical semantic models investigated, with relative gains of up to +13% over their first-order variants.
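The core higher-order idea can be sketched as repeated multiplication of a row-stochastic first-order association matrix, interpolated with the original matrix so that semantic drift from longer chains is damped (the interpolation weight below is a hypothetical knob, not the paper's exact formulation):

    import numpy as np

    def row_normalize(M):
        # Make each row a probability distribution over associated terms.
        sums = M.sum(axis=1, keepdims=True)
        sums[sums == 0] = 1.0
        return M / sums

    def higher_order(A, order=2, alpha=0.7):
        """Chain direct term associations: A is a first-order (term x term) association matrix.

        Returns an interpolation of first- and higher-order associations; a smaller alpha
        trusts the original direct evidence more, limiting semantic drift.
        """
        A = row_normalize(A)
        chained = A.copy()
        mixed = A.copy()
        for _ in range(order - 1):
            chained = row_normalize(chained @ A)   # one more hop in the association graph
            mixed = (1 - alpha) * mixed + alpha * chained
        return row_normalize(mixed)

    # Toy example: "rain" associates with "cloud", "cloud" with "sky";
    # the second-order model links "rain" to "sky" indirectly.
    A = np.array([[0.0, 1.0, 0.0],   # rain  -> cloud
                  [0.0, 0.0, 1.0],   # cloud -> sky
                  [0.0, 0.0, 0.0]])  # sky
    print(higher_order(A, order=2))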

Discourse Complements Lexical Semantics for Non-factoid Answer Reranking
Jansen, Surdeanu, and Clark — ACL 2014
[code and data]
Abstract

We propose a robust answer reranking model for non-factoid questions that integrates lexical semantics with discourse information, driven by two representations of discourse: a shallow representation centered around discourse markers, and a deep one based on Rhetorical Structure Theory. We evaluate the proposed model on two corpora from different genres and domains: one from Yahoo! Answers and one from the biology domain, and two types of non-factoid questions: manner and reason. We experimentally demonstrate that the discourse structure of non-factoid answers provides information that is complementary to lexical semantic similarity between question and answer, improving performance by up to 24% (relative) over a state-of-the-art model that exploits lexical semantic similarity alone. We further demonstrate excellent domain transfer of discourse information, suggesting these discourse features have general utility to non-factoid question answering.

Transmitting Narrative: An Interactive Shift-Summarization Tool for Improving Nurse Communication
Forbes, Surdeanu, Jansen, and Carrington — IEEE Interactive Visual Text Analytics Workshop 2013
Abstract

This paper describes an ongoing visualization project that aims to improve nurse communication. In particular, we investigate the transmission of information that is related to potentially life-threatening clinical events. Currently these events may remain unnoticed or are misinterpreted by nurses, or most unfortunately, are simply not communicated clearly between nurses during a shift change, leading in some cases to catastrophic results. Our visualization system is based on a novel application of machine learning and natural language processing algorithms. Results are presented in the form of an interactive shift-summarization tool which augments existing Electronic Health Records (EHRs). This tool provides a high-level overview of the patient’s health that is generated through an analysis of heterogeneous data: verbal summarizations describing the patient’s health provided by the nurse in charge of the patient, the various monitored vital signs of the patient, and historical information of patients that had unexpected adverse reactions that were not foreseen by the receiving nurse despite being indicated by the responding nurse. In this paper, we introduce the urgent need for such a tool, describe the various components of our heterogeneous data analysis system, and present proposed enhancements to EHRs via the shift-summarization tool. This interactive, visual tool clearly indicates potential clinical events generated by our automated inferencing system; lets a nurse quickly verify the likelihood of these events; provides a mechanism for annotating the generated events; and finally, makes it easy for a nurse to navigate the temporal aspects of patient data collected during a shift. This temporal data can then be used to interactively articulate a narrative that more effectively transmits pertinent data to other nurses.

Adaptive feature-specific spectral imaging
Jansen, Dunlop, Golish, Gehm — SPIE 2012
Abstract

We present an architecture for rapid spectral classification in spectral imaging applications. By making use of knowledge gained in prior measurements, our spectral imaging system is able to design adaptive feature-specific measurement kernels that selectively attend to the portions of a spectrum that contain useful classification information. With measurement kernels designed using a probabilistically-weighted version of principal component analysis, simulations predict an orders-of-magnitude reduction in classification error rates. We report on our latest simulation results, as well as an experimental prototype currently under construction.
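A simplified numpy sketch of the adaptive idea above (the weighting scheme and array shapes are assumptions for illustration, not the system's actual design): library spectra are weighted by the current posterior probability of their classes, and the leading principal components of that weighted ensemble serve as the next measurement kernels.

    import numpy as np

    def adaptive_kernels(spectra, class_labels, class_posteriors, n_kernels=4):
        """Design measurement kernels from a probabilistically weighted PCA.

        spectra:          (n_samples, n_bands) library of training spectra
        class_labels:     (n_samples,) class index of each training spectrum
        class_posteriors: (n_classes,) current belief over classes from prior measurements
        """
        weights = class_posteriors[class_labels]               # weight each spectrum by its class posterior
        mean = np.average(spectra, axis=0, weights=weights)
        centered = (spectra - mean) * np.sqrt(weights)[:, None]
        # Leading right singular vectors of the weighted ensemble = weighted principal components.
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        return vt[:n_kernels]                                   # (n_kernels, n_bands) measurement kernels

    def measure(kernels, spectrum):
        # A feature-specific measurement projects the scene spectrum onto the kernels.
        return kernels @ spectrum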

Development of a scalable image formation pipeline for multiscale gigapixel photography
Golish, Vera, Kelly, Gong, Jansen, Hughes, Kittle, Brady, and Gehm — Optics Express 2012
Abstract

We report on the image formation pipeline developed to efficiently form gigapixel-scale imagery generated by the AWARE-2 multiscale camera. The AWARE-2 camera consists of 98 “microcameras” imaging through a shared spherical objective, covering a 120° × 50° field of view with approximately 40 microradian instantaneous field of view (the angular extent of a pixel). The pipeline is scalable, capable of producing imagery ranging in scope from “live” one megapixel views to full resolution gigapixel images. Architectural choices that enable trivially parallelizable algorithms for rapid image formation and on-the-fly microcamera alignment compensation are discussed.

Multiscale gigapixel photography
Brady et al. — Nature 2012 (high-performance computing contribution, credited in the acknowledgements)
Abstract

Pixel count is the ratio of the solid angle within a camera’s field of view to the solid angle covered by a single detector element. Because the size of the smallest resolvable pixel is proportional to aperture diameter and the maximum field of view is scale independent, the diffraction-limited pixel count is proportional to aperture area. At present, digital cameras operate near the fundamental limit of 1 to 10 megapixels for millimetre-scale apertures, but few approach the corresponding limits of 1 to 100 gigapixels for centimetre-scale apertures. Barriers to high-pixel-count imaging include scale-dependent geometric aberrations, the cost and complexity of gigapixel sensor arrays, and the computational and communications challenge of gigapixel image management. Here we describe the AWARE-2 camera, which uses a 16-mm entrance aperture to capture snapshot, one-gigapixel images at three frames per minute. AWARE-2 uses a parallel array of microcameras to reduce the problems of gigapixel imaging to those of megapixel imaging, which are more tractable. In cameras of conventional design, lens speed and field of view decrease as lens scale increases, but with the experimental system described here we confirm previous theoretical results suggesting that lens speed and field of view can be scale independent in microcamera-based imagers resolving up to 50 gigapixels. Ubiquitous gigapixel cameras may transform the central challenge of photography from the question of where to point the camera to that of how to mine the data.
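A rough back-of-the-envelope version of this scaling argument (the numbers below are illustrative, assuming visible light and a field of view of order one steradian): with angular resolution of roughly λ/D, the diffraction-limited pixel count is the field-of-view solid angle divided by the solid angle of one resolvable pixel,

    N \approx \frac{\Omega_{\mathrm{FOV}}}{(\lambda/D)^{2}} = \Omega_{\mathrm{FOV}}\left(\frac{D}{\lambda}\right)^{2},

so for Ω_FOV ≈ 1 sr and λ ≈ 0.5 µm, a 1 mm aperture gives N ≈ (2×10^3)^2 = 4×10^6 pixels (megapixel class), while a 16 mm aperture like AWARE-2's gives N ≈ (3.2×10^4)^2 ≈ 10^9, on the order of the one-gigapixel snapshots reported.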

Strong systematicity through sensorimotor conceptual grounding: an unsupervised, developmental approach to connectionist sentence processing
Jansen and Watter — Connection Science 2012
Abstract

Connectionist language modelling typically has difficulty with syntactic systematicity, or the ability to generalise language learning to untrained sentences. This work develops an unsupervised connectionist model of infant grammar learning. Following the semantic bootstrapping hypothesis, the network distils word categories using a developmentally plausible infant-scale database of grounded sensorimotor conceptual representations, as well as a biologically plausible semantic co-occurrence activation function. The network then uses this knowledge to acquire an early benchmark clausal grammar using correlational learning, and further acquires separate conceptual and grammatical category representations. The network displays strongly systematic behaviour indicative of the general acquisition of the combinatorial systematicity present in the grounded infant-scale language stream, outperforms previous contemporary models that contain primarily noun and verb word categories, and successfully generalises broadly to novel untrained sensorimotor grounded sentences composed of unfamiliar nouns and verbs. Limitations as well as implications for later grammar learning are discussed.

A computational vector-map model of neonate saccades: Modulating the externality effect through refraction periods
Jansen, Fiacconi, and Gibson — Vision Research 2010
Abstract

The present study develops an explicit and predictive computational model of neonate saccades based on the interaction of several simple mechanisms, including the tendency to fixate towards areas of high contrast, and the decay and recovery of a world-centered contrast representation simulating a low-level inhibition of return mechanism. Emergent properties similar to early visual behaviors develop, including the externality effect (or tendency to focus on external then internal features). The age-associated progression of this effect is modulated by the decay period of the model’s contrast representation, where the high-level behavior of either scanning broadly or locally is modulated by a single decay parameter.
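A toy sketch of the kind of mechanism this abstract describes (grid size, decay and recovery rates are illustrative, not the paper's parameters): fixation goes to the peak of a contrast map gated by a world-centered recovery map, and the fixated neighbourhood is suppressed and then recovers over time, producing inhibition-of-return-like scanning.

    import numpy as np

    def simulate_saccades(contrast, n_fixations=10, decay=0.1, recovery=0.05, radius=2):
        """contrast: 2D array of stimulus contrast; returns the sequence of fixated (row, col) points."""
        recovery_map = np.ones_like(contrast, dtype=float)   # 1 = fully recovered, 0 = fully suppressed
        fixations = []
        for _ in range(n_fixations):
            salience = contrast * recovery_map
            r, c = np.unravel_index(np.argmax(salience), salience.shape)
            fixations.append((int(r), int(c)))
            # Suppress the fixated neighbourhood (a crude inhibition-of-return).
            r0, r1 = max(r - radius, 0), r + radius + 1
            c0, c1 = max(c - radius, 0), c + radius + 1
            recovery_map[r0:r1, c0:c1] *= decay
            # Everything slowly recovers toward 1, controlling how soon locations are revisited.
            recovery_map = np.minimum(recovery_map + recovery, 1.0)
        return fixations

    rng = np.random.default_rng(0)
    print(simulate_saccades(rng.random((20, 20))))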

SayWhen: an automated method for high-accuracy speech onset detection
Jansen and Watter — Behavior Research Methods 2008
[SayWhen website, tutorial, and tool download]
Abstract

Many researchers across many experimental domains utilize the latency of spoken responses as a dependent measure. These measurements are typically made using a voice key, an electronic device that monitors the amplitude of a voice signal, and detects when a predetermined threshold is crossed. Unfortunately, voice keys have been repeatedly shown to be alarmingly error-prone and biased in accurately detecting speech onset latencies. We present SayWhen, an easy-to-use software system for offline speech onset latency measurement that (1) automatically detects speech onset latencies with high accuracy, well beyond voice key performance, (2) automatically detects and flags a subset of trials most likely to have mismeasured onsets, for optional manual checking, and (3) implements a graphical user interface that greatly speeds and facilitates the checking and correction of this flagged subset of trials. This automatic-plus-selective-checking method approaches the gold standard performance of full manual coding in a small fraction of the time.
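For contrast with a hardware voice key, a minimal software onset detector along these general lines (not SayWhen's algorithm; the frame size, noise-floor estimate, and threshold multiplier are illustrative) might scan short-time energy for the first sustained rise above an estimated noise floor:

    import numpy as np

    def detect_onset(signal, sr, frame_ms=10, noise_ms=200, k=5.0, min_frames=3):
        """Return the estimated speech onset latency in seconds, or None if nothing is detected.

        signal: 1D numpy array of audio samples; sr: sample rate in Hz.
        """
        frame = int(sr * frame_ms / 1000)
        n_frames = len(signal) // frame
        energy = np.array([np.mean(signal[i * frame:(i + 1) * frame] ** 2) for i in range(n_frames)])

        # Estimate the noise floor from the (assumed silent) start of the recording.
        noise_frames = max(1, int(noise_ms / frame_ms))
        threshold = k * np.mean(energy[:noise_frames])

        # Require several consecutive supra-threshold frames to avoid triggering on clicks and breaths.
        run = 0
        for i, e in enumerate(energy):
            run = run + 1 if e > threshold else 0
            if run >= min_frames:
                return (i - min_frames + 1) * frame / sr
        return None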