Semantic Search Engine with Sentence Transformers and Vector Embeddings

This project demonstrates how to build a semantic search engine using pre-trained transformer models with Hugging Face's Sentence Transformers library and vector embeddings. Unlike traditional keyword-based search, semantic search understands the meaning behind queries and retrieves the most relevant results based on the context.

How it works

Input text (documents or queries) is converted into dense vector representations (also known as vector embeddings) using a transformer model. These embeddings are then stored in a vector database for efficient retrieval. When a query is entered at search time, its embedding is compared with the indexed document embeddings using the cosine similarity metric, and the most semantically similar documents are returned as search results.
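A minimal sketch of this flow using the Sentence Transformers library is shown below. The model name and example documents are assumptions for illustration, and the embeddings are compared in memory with `util.cos_sim` rather than stored in a vector database, which a real deployment would use for efficient retrieval.

```python
from sentence_transformers import SentenceTransformer, util

# Model name is an assumption; any Sentence Transformers model can be used.
model = SentenceTransformer("all-MiniLM-L6-v2")

# Toy document collection (stand-in for an indexed corpus).
documents = [
    "The Eiffel Tower is located in Paris.",
    "Transformers are a family of deep learning models.",
    "Cosine similarity measures the angle between two vectors.",
]

# Encode documents once; in a full system these embeddings would be
# stored in a vector database instead of kept in memory.
doc_embeddings = model.encode(documents, convert_to_tensor=True)

# Encode the query at search time and compare it against the index.
query = "Which models power semantic search?"
query_embedding = model.encode(query, convert_to_tensor=True)

# Cosine similarity between the query and every document embedding.
scores = util.cos_sim(query_embedding, doc_embeddings)[0]

# Return documents ranked by semantic similarity (highest first).
for idx in scores.argsort(descending=True):
    print(f"{scores[idx]:.3f}  {documents[idx]}")
```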
