Text-Search-Image

This project implements an image search system built on the CLIP (Contrastive Language-Image Pre-training) model. It allows users to search for images using natural language queries.

Features

  • Load and process images from a specified folder
  • Encode images with the CLIP model
  • Store image embeddings in a vector database (ChromaDB)
  • Search for images using natural language text queries (a minimal sketch of this flow follows the list)
  • Display search results in a user-friendly interface
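
The core flow is: embed each image with CLIP, store the vectors in ChromaDB, then embed the text query and retrieve its nearest neighbours. The sketch below is illustrative only, assuming the Hugging Face transformers CLIP implementation and the chromadb Python client; the project's own code in src/image_finder.py may be organized differently.

from pathlib import Path

import chromadb
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

client = chromadb.Client()  # in-memory; use chromadb.PersistentClient for on-disk storage
collection = client.create_collection("images", metadata={"hnsw:space": "cosine"})

# 1. Embed every image in a folder and add it to the collection.
for path in Path("data").glob("*.jpg"):
    image = Image.open(path).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")
    with torch.no_grad():
        embedding = model.get_image_features(**inputs)[0].tolist()
    collection.add(ids=[str(path)], embeddings=[embedding])

# 2. Embed a text query and retrieve the closest images.
text_inputs = processor(text=["a dog playing in a park"], return_tensors="pt", padding=True)
with torch.no_grad():
    query_embedding = model.get_text_features(**text_inputs)[0].tolist()

results = collection.query(query_embeddings=[query_embedding], n_results=5)
print(results["ids"][0])  # paths of the best-matching images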

Requirements

  • Python 3.9.16
  • Flet: For creating the graphical user interface
  • CLIP model: For generating image and text embeddings
  • ChromaDB: For efficient storage and retrieval of vector embeddings

Installation

  1. Clone this repository:
git clone https://github.com/PongpreechaSuea/Text-Search-Image.git
cd Text-Search-Image
  2. Create a virtual environment (optional but recommended):
python -m venv test
source test/bin/activate  # On Windows, use `test\Scripts\activate`
  3. Install the required packages:
pip install -r requirements.txt
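
To quickly confirm that the core libraries installed correctly, you can run a small import check like the one below (this assumes requirements.txt includes chromadb and flet; adjust to the actual pinned packages):

import chromadb
import flet

print("chromadb", chromadb.__version__)
print("flet imported OK")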

Usage

  1. Run the application:
python run.py
  2. In the application interface (a rough sketch of this UI wiring follows the list):

    • Enter the path to the folder containing your images
    • Click the "Add" button to load and process the images
    • Enter a text query in the search box
    • Click the "Search" button to find matching images
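
The snippet below is a hypothetical sketch of the kind of Flet wiring run.py performs; control names and the handlers load/search stand in for the project's actual code in src/.

import flet as ft


def main(page: ft.Page):
    page.title = "Text-Search-Image"

    folder_field = ft.TextField(label="Image folder path")
    query_field = ft.TextField(label="Search query")
    results = ft.Column()

    def on_add(e):
        # Would call into src/image_finder.py to embed and index the folder.
        results.controls.append(ft.Text(f"Indexed images in {folder_field.value}"))
        page.update()

    def on_search(e):
        # Would embed the query with CLIP and look up matches in ChromaDB.
        results.controls.append(ft.Text(f"Searching for: {query_field.value}"))
        page.update()

    page.add(
        folder_field,
        ft.ElevatedButton("Add", on_click=on_add),
        query_field,
        ft.ElevatedButton("Search", on_click=on_search),
        results,
    )


ft.app(target=main)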

Project Structure

text_search_image_clip/
├── assets/
├── data/
├── src/
│   ├── config.py
│   ├── image_finder.py
│   └── image_generator.py
├── README.md
├── requirements.txt
└── run.py

Example

DATA EXAMPLE (screenshot)

INTERFACE (screenshot)

RESULTS (screenshot)
