# semantic-image-search

A web-based application that lets users perform semantic search over a collection of images using either text descriptions or image inputs. It leverages OpenAI's CLIP model for embedding extraction and Milvus for efficient similarity search.
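As a rough sketch of how a CLIP text embedding might be extracted with the HuggingFace `transformers` API (the checkpoint name and L2 normalization below are assumptions for illustration, not taken from this project's code):

```python
# Sketch: extract a text embedding with CLIP via HuggingFace transformers.
# The checkpoint and post-processing are assumptions; the app's pipeline may differ.
import torch
from transformers import CLIPModel, CLIPProcessor

model_name = "openai/clip-vit-base-patch32"  # assumed checkpoint
model = CLIPModel.from_pretrained(model_name)
processor = CLIPProcessor.from_pretrained(model_name)

inputs = processor(text=["a photo of a dog"], return_tensors="pt", padding=True)
with torch.no_grad():
    text_emb = model.get_text_features(**inputs)

# L2-normalize so that dot products equal cosine similarities.
text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)
print(text_emb.shape)  # torch.Size([1, 512]) for this checkpoint
```

Image embeddings work the same way via `processor(images=...)` and `model.get_image_features(...)`, which is what makes text-to-image and image-to-image search share one vector space.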
## Features

- Collection Management: Create, upload to, and delete image collections.
- Search Capabilities: Perform text- and image-based searches on image collections.
- Efficient Storage and Retrieval: Utilize Milvus for fast and scalable vector similarity search.
## Requirements

- Python 3.8+
- PyTorch
- Streamlit
- Transformers
- pymilvus
## Installation

1. Clone the repository:

   ```bash
   git clone https://github.com/sefaburakokcu/semantic-image-search.git
   cd semantic-image-search
   ```

2. Install the required packages:

   ```bash
   pip install -r requirements.txt
   ```

3. Run the application (the flag raises the upload limit to 2000 MB):

   ```bash
   streamlit run app.py --server.maxUploadSize 2000
   ```
## Usage

### Collection Management

- Create Collection: Enter a name for the new collection and click "Create Collection."
- Upload Images to Collection: Select an existing collection, upload images or a zip file containing images, and click "Upload Images."
- Delete Collection: Select a collection to delete and click "Delete Collection."
### Search

- Search Configuration:
  - Select the collection to search.
  - Set the number of results to return and the number of results to display per row.
  - Choose between text search and image search.
- Search Input:
  - For text search, enter a search query.
  - For image search, upload an image file.
- Search Results: Click "Search" to view results. Each image is displayed with its similarity percentage.
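The similarity percentage shown next to each result can be derived from the cosine similarity between normalized embeddings. A minimal sketch with stand-in vectors (the function name and the exact scaling used by the app are assumptions):

```python
import numpy as np

def similarity_percentage(query_emb: np.ndarray, image_emb: np.ndarray) -> float:
    """Cosine similarity between two embeddings, scaled to 0-100%.

    Assumes a simple linear mapping with negative similarities clamped
    to zero; the app's exact formula may differ.
    """
    q = query_emb / np.linalg.norm(query_emb)
    v = image_emb / np.linalg.norm(image_emb)
    cosine = float(np.dot(q, v))  # in [-1, 1]
    return round(max(cosine, 0.0) * 100, 2)

# Parallel vectors are a 100% match.
print(similarity_percentage(np.array([1.0, 2.0]), np.array([2.0, 4.0])))  # 100.0
# Orthogonal vectors score 0%.
print(similarity_percentage(np.array([1.0, 0.0]), np.array([0.0, 1.0])))  # 0.0
```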
## License

This project is licensed under the MIT License. See the LICENSE file for details.
## Acknowledgements

- Streamlit for the web application framework
- Milvus for the vector database
- OpenAI for the CLIP model, used via HuggingFace Transformers
## Contact

For any questions or issues, please open an issue on GitHub.