The Multimodal CLIP Embedding Microservice converts textual and visual data into high-dimensional vector embeddings. These embeddings capture the semantic essence of the input, enabling robust applications in multimodal data processing, information retrieval, recommendation systems, and more.
- High Performance: Optimized for rapid and reliable embedding generation for text and images.
- Scalable: Capable of handling high-concurrency workloads, ensuring consistent performance under heavy loads.
- Easy Integration: Offers a simple API interface for seamless integration into diverse workflows.
- Customizable: Supports tailored configurations, including model selection and preprocessing adjustments, to fit specific requirements.
This service empowers users to configure and deploy embedding pipelines tailored to their needs.
To build the Docker image, execute the following commands:
cd ../../..
docker build -t opea/embedding:latest \
--build-arg https_proxy=$https_proxy \
--build-arg http_proxy=$http_proxy \
-f comps/embeddings/src/Dockerfile .
Use Docker Compose to start the service:
cd comps/embeddings/deployment/docker_compose/
docker compose up clip-embedding-server -d
Verify that the service is running by performing a health check:
curl http://localhost:6000/v1/health_check \
-X GET \
-H 'Content-Type: application/json'
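The container may take a few seconds to become ready after `docker compose up`, so a script that depends on the service should poll the health-check endpoint rather than assume it is up. Below is a minimal sketch of such a readiness probe in Python (stdlib only); the URL is the health-check endpoint shown above, and the retry count and delay are arbitrary choices you may want to tune.

```python
import time
import urllib.error
import urllib.request

def wait_until_ready(url="http://localhost:6000/v1/health_check",
                     retries=30, delay=2.0):
    """Poll the health-check endpoint until it answers with HTTP 200.

    Returns True once the service responds, False if all retries fail.
    """
    for _ in range(retries):
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            pass  # service not up yet; retry after a short pause
        time.sleep(delay)
    return False
```

Calling `wait_until_ready()` right after `docker compose up` blocks until the service is reachable, which makes it suitable as a gate in CI or smoke-test scripts.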
The service supports OpenAI API-compatible requests.
- Single Text Input:

  curl http://localhost:6000/v1/embeddings \
  -X POST \
  -d '{"input":"Hello, world!"}' \
  -H 'Content-Type: application/json'
- Multiple Texts with Parameters:

  curl http://localhost:6000/v1/embeddings \
  -X POST \
  -d '{"input":["Hello, world!","How are you?"], "dimensions":100}' \
  -H 'Content-Type: application/json'
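For application code, the curl requests above translate directly into a small Python client. The sketch below uses only the standard library and assumes the endpoint shown above plus an OpenAI-compatible response shape (vectors under `data[i].embedding`); the helper name `embed` is illustrative, not part of the service.

```python
import json
import urllib.request

def embed(texts, url="http://localhost:6000/v1/embeddings", dimensions=None):
    """POST an OpenAI-style embeddings request; return one vector per input.

    `texts` may be a single string or a list of strings, mirroring the
    curl examples above.
    """
    payload = {"input": texts}
    if dimensions is not None:
        payload["dimensions"] = dimensions
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        body = json.load(resp)
    # OpenAI-compatible responses carry the vectors under data[i].embedding.
    return [item["embedding"] for item in body["data"]]
```

For example, `embed(["Hello, world!", "How are you?"], dimensions=100)` mirrors the second curl request and returns a list of two embedding vectors.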