In this fourth and last post in our blog series on pretrained transformer models for search, we introduce a cross-encoder model with all-to-all interaction between the query and the passage. We deploy this model as the final ranking stage in our multiphase retrieval and ranking pipeline. We submit our end-to-end retrieval and ranking result to the MS Marco Passage Ranking Leaderboard.

Photo by Patrick Hendry on Unsplash

In this blog series we demonstrate how to represent transformer models in a multiphase retrieval and ranking pipeline using Vespa.ai. We also evaluate these models on the largest Information Retrieval relevance dataset, namely the MS Marco Passage ranking dataset. We demonstrate how to achieve close to state of the art ranking using miniature transformer models with just 22M parameters, beating large ensemble models with billions of parameters.

In the first post in this series we introduced using pretrained language models for ranking and three popular methods for using them for text ranking. In the…
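To make the all-to-all interaction of the final ranking stage described above concrete, here is a minimal Python sketch of cross-encoder re-ranking using the Hugging Face transformers library. The checkpoint name is an assumption picked for illustration (a MiniLM-based MS Marco cross-encoder of roughly the miniature size discussed in this series); in the series itself the model runs inside Vespa's ranking pipeline rather than in offline Python, and the expensive pair-wise evaluation is only applied to the small set of candidates surviving the earlier phases.

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumed checkpoint: a MiniLM-based MS Marco cross-encoder (~22M parameters).
MODEL = "cross-encoder/ms-marco-MiniLM-L-6-v2"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL)
model.eval()

def rerank(query, passages):
    # Encode every (query, passage) pair jointly so all query and passage
    # tokens can attend to each other (all-to-all interaction).
    inputs = tokenizer([query] * len(passages), passages,
                       padding=True, truncation=True, max_length=128,
                       return_tensors="pt")
    with torch.no_grad():
        scores = model(**inputs).logits.squeeze(-1)  # one relevance score per pair
    return sorted(zip(scores.tolist(), passages), reverse=True)

# Re-rank a handful of candidates produced by the earlier retrieval phases.
candidates = ["MS MARCO is a collection of datasets for deep learning in search.",
              "The weather in Trondheim is cold in January."]
for score, passage in rerank("what is ms marco", candidates):
    print(f"{score:.2f}  {passage}")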


In this blog series we demonstrate how to represent transformer models in a multiphase retrieval and ranking pipeline using Vespa.ai. We also evaluate these models on the largest Information Retrieval relevance dataset, namely the MS Marco Passage ranking dataset. We demonstrate how to achieve close to state of the art ranking using miniature transformer models with just 22M parameters, beating large ensemble models with billions of parameters.

Photo by Frank Busch on Unsplash

In the first post in this series we introduced using pretrained language models for ranking and three popular methods for using them for text ranking. In the…


In this blog series we demonstrate how to represent transformer models in a multiphase retrieval and ranking pipeline using Vespa.ai. We also evaluate these models on the largest Information Retrieval relevance dataset, namely the MS Marco Passage ranking dataset. We demonstrate how to achieve close to state of the art ranking using miniature transformer models with just 22M parameters, beating large ensemble models with billions of parameters.

Photo by Rob Fuller on Unsplash

In the first post in this series we introduced using pretrained models for ranking. In this second post we study efficient candidate retrievers which can be used…
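As a rough illustration of what an efficient candidate retriever does, here is a small Python sketch of dense retrieval with a bi-encoder: passages are embedded ahead of time, the query is embedded at run time, and retrieval becomes a nearest-neighbor search over vectors. The checkpoint name is an assumption chosen for illustration, and the brute-force dot product stands in for the approximate nearest-neighbor index a real deployment would use.

import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")  # assumed bi-encoder
passages = ["Vespa.ai is an engine for low-latency computation over large data sets.",
            "MS MARCO contains real Bing queries with human-annotated relevant passages.",
            "BM25 is a classic sparse ranking function based on term statistics."]
# Offline: embed and normalize the passage collection once.
passage_vectors = encoder.encode(passages, normalize_embeddings=True)

def retrieve(query, k=2):
    # Online: embed the query and score all passages by cosine similarity.
    query_vector = encoder.encode(query, normalize_embeddings=True)
    scores = passage_vectors @ query_vector  # brute force here, an ANN index in practice
    return [passages[i] for i in np.argsort(-scores)[:k]]

print(retrieve("what is ms marco"))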


In this blog series we demonstrate how to represent transformer models in a multiphase retrieval and ranking pipeline using Vespa.ai. We also evaluate these models on the largest Information Retrieval relevance dataset, namely the MS Marco Passage ranking dataset. We demonstrate how to achieve close to state of the art ranking using miniature transformer models with just 22M parameters, beating large ensemble models with billions of parameters.

Photo by Jamie Street on Unsplash

In this first post we give an introduction to Transformers for text ranking and three different methods of applying them for ranking. We also cover multiphase retrieval…
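To illustrate the multiphase idea in isolation, here is a toy, self-contained Python sketch of phased ranking: a cheap function scores every matched document, and an expensive function re-scores only the best candidates. The scoring functions below are placeholders standing in for, say, BM25 and a transformer model; none of this is the series' actual code.

from typing import Callable

def phased_ranking(query: str, documents: list[str],
                   cheap_score: Callable[[str, str], float],
                   expensive_score: Callable[[str, str], float],
                   rerank_count: int = 100) -> list[str]:
    # Phase 1: rank the full candidate set with the cheap scorer.
    first_phase = sorted(documents, key=lambda d: cheap_score(query, d), reverse=True)
    # Phase 2: re-rank only the top rerank_count hits with the expensive scorer.
    head, tail = first_phase[:rerank_count], first_phase[rerank_count:]
    return sorted(head, key=lambda d: expensive_score(query, d), reverse=True) + tail

# Toy scorers standing in for BM25 and a transformer cross-encoder.
cheap = lambda q, d: sum(t in d.lower() for t in q.lower().split())
expensive = lambda q, d: float(len(set(q.lower().split()) & set(d.lower().split())))
docs = ["vespa phased ranking", "transformer models for search", "an unrelated passage"]
print(phased_ranking("transformer ranking", docs, cheap, expensive, rerank_count=2))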


Photo by Markus Winkler on Unsplash

BERT (Bidirectional Encoder Representations from Transformers) turned two years old a few days ago, and since its introduction it has revolutionized Search and Information Retrieval. It has drastically improved accuracy on many information-seeking tasks, be it answering questions or ranking documents, far beyond what was thought possible just a few years ago. …


Introduction

Holiday shopping season is upon us, and it’s time for a blog post on e-commerce search and recommendation using Vespa.ai. Vespa.ai is used as the search and recommendation backend at multiple Yahoo e-commerce sites in Asia, like tw.buy.yahoo.com.

This blog post discusses some of the challenges in e-commerce search and recommendation, and shows how they can be solved using the features of Vespa.ai.

Photo by Jonas Leupe on Unsplash

Text matching and ranking in e-commerce search

E-commerce search has text ranking requirements where traditional text ranking features like BM25 or TF-IDF might produce poor results. For an introduction to some of the issues with TF-IDF/BM25, see the influence of TF-IDF algorithms in e-commerce…
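As a toy illustration of how pure term statistics can misfire on short product titles, the following Python sketch scores a tiny, invented catalog with a hand-rolled BM25. The catalog, query, and parameter values are made up for the example; the point is only that a rare but incidental token ("2-pack") can outweigh the token that carries the shopping intent ("iphone") because the intent token is common in the catalog.

import math
from collections import Counter

# Invented mini catalog of product titles (illustration only).
corpus = ["iphone 12 case black",
          "iphone 12 screen protector",
          "iphone 12 charger cable",
          "usb cable 2-pack"]
docs = [title.split() for title in corpus]
N = len(docs)
avgdl = sum(len(d) for d in docs) / N
df = Counter(term for d in docs for term in set(d))

def bm25(query, doc, k1=1.2, b=0.75):
    tf = Counter(doc)
    score = 0.0
    for term in query.split():
        if term not in tf:
            continue
        idf = math.log(1 + (N - df[term] + 0.5) / (df[term] + 0.5))
        score += idf * tf[term] * (k1 + 1) / (tf[term] + k1 * (1 - b + b * len(doc) / avgdl))
    return score

for title, doc in zip(corpus, docs):
    print(f"{bm25('iphone cable 2-pack', doc):.2f}  {title}")
# The generic accessory "usb cable 2-pack" outranks "iphone 12 charger cable"
# because "iphone" is frequent in the catalog and gets a low IDF weight.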

Jo Kristian Bergum

Working on Vespa.ai
