1 Introduction

The application of neural network models, whether shallow or deep, to information retrieval (IR) tasks falls under the purview of neural IR. Over the years, machine learning methods—including neural networks—have been widely employed in IR, for example in learning-to-rank (LTR) frameworks (Liu 2009). Recently, neural representation learning and neural models with deep architectures have produced significant improvements in speech recognition, machine translation, and computer vision tasks (LeCun et al. 2015). The IR community is now exploring similar methods, which may lead to new models and performance breakthroughs in retrieval scenarios.

In text retrieval, neural approaches may refer to the application of pre-trained term embeddings to ad-hoc retrieval (Ai et al. 2016; Diaz et al. 2016; Ganguly et al. 2015; Guo et al. 2016b; Kenter et al. 2016; Mitra et al. 2016; Roy et al. 2016; Vulić and Moens 2015; Zamani and Croft 2016a, b), or to the end-to-end training of deep neural networks for the ranking task (Cohen and Croft 2016; Guo et al. 2016a; Huang et al. 2013; Mitra et al. 2017; Nanni et al. 2017; Pang et al. 2016a, b; Severyn and Moschitti 2015). But neural IR may also encompass the use of neural networks in proactive recommendations (Luukkonen et al. 2016; Van Gysel et al. 2017), query formulation and suggestions (Mitra 2015; Mitra and Craswell 2015; Sordoni et al. 2015), modelling user behaviour (Borisov et al. 2016a, b), entity ranking (Van Gysel et al. 2016a, b), conversational agents (Yan et al. 2016; Zhou et al. 2016), and multi-modal retrieval (Ma et al. 2015). The growing body of work in this area has been supplemented by an increasing number of recent workshops (Craswell et al. 2016a, b, 2017) and tutorials (Kenter et al. 2017; Li and Lu 2016; Mitra and Craswell 2017a, b). This special issue of the Information Retrieval journal provides an additional venue for findings from research happening at the intersection of information retrieval and neural networks.
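To make the first family of approaches concrete, a common baseline scores a document against a query by comparing averaged pre-trained term embeddings under cosine similarity. The sketch below is a toy illustration of that idea only; the vectors, vocabulary, and function names are hypothetical and not drawn from any of the cited papers, which propose considerably more sophisticated models.

```python
import numpy as np

# Toy "pre-trained" term embeddings (hypothetical 3-d vectors for
# illustration; in practice these would come from a model such as
# word2vec or GloVe trained on a large corpus).
embeddings = {
    "neural":    np.array([0.9, 0.1, 0.0]),
    "network":   np.array([0.8, 0.2, 0.1]),
    "retrieval": np.array([0.1, 0.9, 0.2]),
    "ranking":   np.array([0.2, 0.8, 0.3]),
}

def text_vector(terms):
    """Represent a text as the centroid of its term embeddings."""
    vecs = [embeddings[t] for t in terms if t in embeddings]
    return np.mean(vecs, axis=0)

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

query = ["neural", "ranking"]
doc_a = ["neural", "network"]
doc_b = ["retrieval", "ranking"]

# Documents are then ranked by descending cosine score against the query.
score_a = cosine(text_vector(query), text_vector(doc_a))
score_b = cosine(text_vector(query), text_vector(doc_b))
```

Centroid-plus-cosine is deliberately the simplest instantiation; much of the cited work studies where this weak baseline breaks down and how learned ranking architectures improve on it.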

2 Overview of papers

We received 11 submissions in response to the call for papers first announced in August 2016. Four papers were accepted from this pool and are presented here. The first serves as a survey of the current body of literature in neural IR. The other three focus on the application of neural models to retrieval in the context of text, images, and music, respectively. We briefly discuss each paper in the remainder of this section.

Onal et al. (2017) summarize the large body of current work in neural IR. The survey begins with an overview of basic neural network concepts and popular models for text processing tasks. The body of the survey then develops a broad taxonomy of neural models across different IR tasks and scenarios. It concludes with a reflection on lessons learned and potential future directions for the field.

Yang et al. (2017) investigate the use of pre-trained term embeddings for a Twitter classification task. They find that performance improves when the data used to train the embeddings aligns with the classification dataset. They evaluate various hyper-parameter choices, including the context window size and the dimensionality of the embedding vectors, and report their findings and insights.
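To illustrate what the context window hyper-parameter controls, the sketch below enumerates the (target, context) pairs that a skip-gram-style embedding model would train on for a toy tokenized text. This is a hypothetical illustration of the general technique, not the authors' code, and the example tokens are invented.

```python
def skipgram_pairs(tokens, window):
    """Enumerate (target, context) training pairs for a given window size."""
    pairs = []
    for i, target in enumerate(tokens):
        lo = max(0, i - window)
        hi = min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:  # a token is not its own context
                pairs.append((target, tokens[j]))
    return pairs

tokens = ["great", "game", "tonight", "go", "team"]
# A wider window yields more, and more distant, context pairs per target,
# changing the training signal the embeddings are learned from.
narrow = skipgram_pairs(tokens, window=1)
wide = skipgram_pairs(tokens, window=2)
```

Sweeping `window` (together with the embedding dimensionality passed to the training procedure) is the kind of hyper-parameter exploration this sort of study reports on.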

Carrara et al. (2017) focus on image retrieval from short text queries. They propose multiple deep architectures that jointly learn representations of images and text queries for ranking, and report state-of-the-art performance.

Wang et al. (2017) explore neural models for context-aware music recommendation. They propose the music2vec model for learning distributed representations of songs, and use the learnt representations in the recommendation task. The proposed model significantly outperforms baseline methods, particularly when the data is sparse.

3 Conclusion

The papers included in this special issue cover a broad range of IR scenarios spanning text and multimedia retrieval. In addition, the survey paper provides a summary of current work that we believe will serve as a useful reference for research in this area. Despite the growing body of work in neural IR, it is important to keep in mind that we are still in the early days of this field. The community is ready for exciting new breakthroughs, but we must also ground our enthusiasm in thorough empirical evaluation of these new methods and, in future work, push for a deeper understanding of their relationship to classical IR approaches.