This project demonstrates how to use Spring AI to connect with locally deployed Large Language Models (LLMs).
Spring AI is a Java-based application framework for building AI-powered applications. It enables developers to integrate generative AI into Java applications and supports various AI providers, including OpenAI, Bedrock, Ollama, and DeepSeek AI.
A Large Language Model (LLM) is an AI model designed to process and generate human-like text based on vast amounts of training data. These models are trained on billions of words from diverse sources such as books, articles, and websites. They can generate text, answer questions, and perform various NLP tasks.
Ollama is a software tool that allows users to run open-source LLMs locally. Once a model has been downloaded, it enables AI processing entirely on your machine, without requiring an internet connection.
- Integration with a locally running LLM using Ollama.
- A user interface for submitting prompt-based queries.
- LLM integration using the Spring AI (Java-based) framework.
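To sketch what the integration does under the hood, the snippet below calls Ollama's local REST API (`POST /api/generate`) directly with the JDK's built-in HTTP client. In the actual project this plumbing is handled by Spring AI's Ollama starter; the class and method names here are illustrative, and the endpoint, port, and model name follow Ollama's defaults and the model used in this project.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class OllamaPromptExample {

    // Builds the JSON body for Ollama's /api/generate endpoint.
    // "stream": false asks Ollama for a single JSON response instead of a token stream.
    static String buildRequestBody(String model, String prompt) {
        return String.format(
                "{\"model\": \"%s\", \"prompt\": \"%s\", \"stream\": false}",
                model, prompt);
    }

    public static void main(String[] args) {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:11434/api/generate")) // Ollama's default port
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(
                        buildRequestBody("deepseek-r1:1.5b", "Why is the sky blue?")))
                .build();

        try {
            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.body());
        } catch (Exception e) {
            // The call only succeeds if a local Ollama server is running.
            System.out.println("Ollama not reachable: " + e.getMessage());
        }
    }
}
```

In the Spring AI version of this flow, the framework builds an equivalent request for you, so application code deals only with prompts and responses.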
Before running the application, ensure the following are installed:
- Backend: Java 17+ and Maven.
- An IDE: IntelliJ IDEA (or any preferred Java IDE).
- Ollama: To run the local LLM.
- Download the software from Ollama's official site.
- Install it on your system.
- Verify the installation by running:

  ```shell
  ollama --version
  ```
- Pull/download an LLM model to run with Ollama:

  ```shell
  ollama pull deepseek-r1:1.5b
  ```
- Run the model:

  ```shell
  ollama run deepseek-r1:1.5b
  ```
```shell
git clone https://github.com/rjtmangla/Spring-AI-with-Ollama-Prompt.git
cd Spring-AI-with-Ollama-Prompt
```
Use Maven to clean and build the project:

```shell
mvn clean install
```
Start the application:

```shell
mvn spring-boot:run
```
Once the application is running, open your browser and visit `http://localhost:8080`.
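For the application to reach the locally running model, Spring AI must be pointed at the Ollama server. A typical `src/main/resources/application.properties` for this setup might look like the fragment below; the property names are Spring AI's standard Ollama properties, and the model name matches the one pulled earlier.

```properties
# Where the local Ollama server listens (Ollama's default port)
spring.ai.ollama.base-url=http://localhost:11434

# Which pulled model Spring AI should use for chat calls
spring.ai.ollama.chat.options.model=deepseek-r1:1.5b
```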
```
Spring-AI-with-Ollama-Prompt/
├── src/main/java
│   ├── com.gupta.ai.demo.SpringAiDemoApplication   # Main application starter
│   ├── controller                                  # REST controllers for API endpoints
│   └── service                                     # Business logic for AI functionalities
└── src/main/resources                              # Configuration files, templates, and static resources
```
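To illustrate how the controller and service layers above divide responsibilities, here is a minimal, framework-free sketch. The class names (`PromptController`, `PromptService`) are hypothetical; in the real project, Spring annotations such as `@RestController` would wire the controller, and the service would delegate to Spring AI's chat client rather than the stub used here.

```java
// Hypothetical sketch of the controller/service split described above.
// The controller validates input and delegates; the service owns the
// "talk to the LLM" logic, stubbed out here so the example is self-contained.

interface PromptService {
    String ask(String prompt); // business logic: send the prompt to the LLM
}

// Stub standing in for the Spring AI / Ollama call.
class StubPromptService implements PromptService {
    @Override
    public String ask(String prompt) {
        return "echo: " + prompt;
    }
}

// Plays the role of the REST controller: validates input, delegates to the service.
class PromptController {
    private final PromptService service;

    PromptController(PromptService service) {
        this.service = service;
    }

    String handle(String prompt) {
        if (prompt == null || prompt.isBlank()) {
            return "Prompt must not be empty";
        }
        return service.ask(prompt);
    }
}

public class LayeringDemo {
    public static void main(String[] args) {
        PromptController controller = new PromptController(new StubPromptService());
        System.out.println(controller.handle("Hello, local LLM!"));
    }
}
```

Keeping the LLM call behind an interface like this is what makes it easy to swap providers (OpenAI, Bedrock, Ollama) without touching the controller layer.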