Sparkly

Welcome to Sparkly! Sparkly is a TF/IDF top-k blocking system for entity matching, built on top of Apache Spark and PyLucene.

Paper and Data

A link to our paper can be found here. Data used in the paper can be found here.

Quick Start: Sparkly in 30 Seconds

There are three main steps to running Sparkly:

  1. Reading Data

# import paths follow examples/example.ipynb and may differ across versions
from pyspark.sql import SparkSession
from sparkly.index import IndexConfig, LuceneIndex
from sparkly.search import Searcher

spark = SparkSession.builder.getOrCreate()

table_a = spark.read.parquet('./examples/data/abt_buy/table_a.parquet')
table_b = spark.read.parquet('./examples/data/abt_buy/table_b.parquet')

  2. Index Building

# index table_a on the 'name' column, analyzed into 3-grams
config = IndexConfig(id_col='_id')
config.add_field('name', ['3gram'])

index = LuceneIndex('/tmp/example_index/', config)
index.upsert_docs(table_a)

  3. Blocking

# retrieve the top-50 candidates from table_a for each record in table_b
query_spec = index.get_full_query_spec()

candidates = Searcher(index).search(table_b, query_spec, id_col='_id', limit=50)
candidates.show()
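
Each row of candidates pairs a search record from table_b with the ids and scores of its top-k matches from table_a, stored as arrays. A small follow-up step can flatten this into one row per candidate pair; the ids and scores column names below are assumptions about the output schema, so check candidates.printSchema() first:

from pyspark.sql import functions as F

# one row per (search record, candidate) pair; 'ids' and 'scores' are
# assumed output column names -- verify with candidates.printSchema()
pairs = (candidates
         .select('_id', F.explode(F.arrays_zip('ids', 'scores')).alias('m'))
         .select(F.col('_id').alias('b_id'),
                 F.col('m.ids').alias('a_id'),
                 F.col('m.scores').alias('score')))
pairs.show()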

Installing Dependencies

Python

Sparkly has been tested with Python 3.10 on Ubuntu 22.04.

PyLucene

Unfortunately, PyLucene is not available on PyPI; to install it, see the PyLucene docs. Sparkly has been tested with PyLucene 9.4.1.

Other Requirements

Once PyLucene has been installed, Sparkly can be installed with pip by running the following command in the root directory of this repository.

$ python3 -m pip install .
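
To sanity-check the installation, a quick import test can help. The sparkly module paths below follow the bundled examples and are an assumption; adjust them if your version differs.

# verify that PyLucene and Sparkly import cleanly
# (sparkly module paths are illustrative; see examples/example.ipynb)
import lucene
from sparkly.index import IndexConfig, LuceneIndex
from sparkly.search import Searcher

print('PyLucene', lucene.VERSION, 'imported OK')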

Tutorials

To get started with Sparkly, we recommend the IPython notebook included with the repository at examples/example.ipynb.

Additional examples of how to use Sparkly are provided under the examples/ directory in this repository.

How It Works

Sparkly is built to do blocking for entity matching. Many solutions have been developed for this problem, ranging from basic SQL joins to deep learning based approaches. Sparkly takes a top-k approach to blocking: each search record is paired with the k indexed records that have the highest BM25 scores. In SQL, this might look something like executing the following query for each search record,

SELECT id, BM25(<QUERY>, name) AS score 
FROM table_a 
ORDER BY score DESC
LIMIT <K>;

where QUERY is derived from the search record.

This kind of search is very common in information retrieval and keyword search applications; in fact, it is exactly what Apache Lucene is designed to do. While this form of search produces high-quality results, it can also be very compute intensive. To speed up search, we therefore leverage PySpark to distribute the computation, which lets us use a large number of machines to perform search without relying on approximation algorithms.
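
To make the distributed top-k idea concrete, below is a self-contained toy sketch, not Sparkly's actual implementation: it builds a tiny in-memory index on the driver, broadcasts it, and scores each partition of search records with a standard BM25 formula. Sparkly instead builds a PyLucene index on disk and uses Lucene's BM25 scoring on every worker.

# A self-contained toy sketch of distributed top-k blocking with PySpark.
# Illustration only; Sparkly's real implementation uses a PyLucene index.
import heapq
import math
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# tiny stand-ins for the indexed table (table_a) and search table (table_b)
table_a = [(0, 'sony bravia 40 inch tv'), (1, 'apple ipod nano 8gb')]
table_b = [(100, 'sony 40in bravia television'), (101, 'ipod nano, 8 gb, apple')]

def tokenize(s):
    return s.lower().split()

# build a small in-memory "index" on the driver and broadcast it to workers
docs = {rid: tokenize(text) for rid, text in table_a}
n_docs = len(docs)
avg_len = sum(len(toks) for toks in docs.values()) / n_docs
doc_freq = {}
for toks in docs.values():
    for tok in set(toks):
        doc_freq[tok] = doc_freq.get(tok, 0) + 1
bcast = spark.sparkContext.broadcast((docs, doc_freq, n_docs, avg_len))

def bm25_topk(rows, k=2, k1=1.2, b=0.75):
    docs, doc_freq, n, avg_len = bcast.value
    for rid, text in rows:
        scores = {}
        for tok in tokenize(text):
            if tok not in doc_freq:
                continue
            idf = math.log(1 + (n - doc_freq[tok] + 0.5) / (doc_freq[tok] + 0.5))
            for did, toks in docs.items():
                tf = toks.count(tok)
                if tf:
                    denom = tf + k1 * (1 - b + b * len(toks) / avg_len)
                    scores[did] = scores.get(did, 0.0) + idf * tf * (k1 + 1) / denom
        # keep only the k highest-scoring indexed records for this search record
        yield rid, heapq.nlargest(k, scores.items(), key=lambda kv: kv[1])

pairs = spark.sparkContext.parallelize(table_b).mapPartitions(bm25_topk).collect()
print(pairs)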

API Docs

API docs can be found here.

Tips for Installing PyLucene

For tips on installing PyLucene take a look at this readme.
