In this exercise, we will add document content as context to the LLM query.
Modify the `DataService` class.
Add a `document` attribute of type `Resource`, annotated with `@Value("classpath:data/email.txt")`, as shown below.
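For reference, the field declaration could look like this (using the standard Spring imports for `@Value` and `org.springframework.core.io.Resource`):

```java
import org.springframework.beans.factory.annotation.Value;
import org.springframework.core.io.Resource;

// Classpath resource holding the email used as context
@Value("classpath:data/email.txt")
private Resource document;
```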
Add a new method `getDocumentContent` that reads the content of the file and returns it as a string.
```java
public String getDocumentContent() {
    try {
        // Read the whole classpath resource into a String
        // (requires java.io.IOException and java.nio.charset.Charset imports)
        return document.getContentAsString(Charset.defaultCharset());
    } catch (IOException e) {
        throw new RuntimeException(e);
    }
}
```
Modify the `LLMService` class.
Add a `DataService` attribute and set it in the constructor by injection from the Spring context.
Add a `PromptTemplate` attribute and initialize it in the constructor by passing the following hard-coded instructions as the argument (a constructor sketch follows the template).
```
Answer the question based on this context:
{context}
Question:
{question}
```
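A minimal sketch of that wiring, assuming for simplicity that `DataService` is the only injected dependency shown (the real constructor may also receive other beans from earlier exercises):

```java
private final DataService dataService;
private final PromptTemplate userPromptTemplate;

public LLMService(final DataService dataService) {
    // Constructor injection from the Spring context
    this.dataService = dataService;
    // Hard-coded template with {context} and {question} placeholders
    this.userPromptTemplate = new PromptTemplate("""
            Answer the question based on this context:
            {context}
            Question:
            {question}
            """);
}
```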
Update the `askQuestionAboutContext` method so that it generates the question from the prompt template:
- Add `new UserMessage(question)` to the history
- Build the `userMessage` object by calling the `createMessage` method on the `PromptTemplate` object with a map as argument
- Return the `getResponse` result with the `userMessage` object as argument
```java
public Stream<String> askQuestionAboutContext(final String question) {
    // Keep the raw question in the conversation history
    history.add(new UserMessage(question));
    // Fill the template placeholders with the document content and the question
    Message userMessage = userPromptTemplate.createMessage(
            Map.of("context", dataService.getDocumentContent(),
                    "question", question));
    return getResponse(userMessage);
}
```
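The `getResponse` method already exists from the previous exercises and is unchanged here. Purely for context, a possible shape, assuming a streaming `ChatClient`-based implementation (the `chatClient` field and the exact fluent calls are assumptions, not part of this exercise), might be:

```java
private Stream<String> getResponse(final Message userMessage) {
    // Assumption: send the history plus the new message and stream the answer
    List<Message> messages = new ArrayList<>(history);
    messages.add(userMessage);
    return chatClient.prompt()
            .messages(messages)
            .stream()
            .content()   // Flux<String> of response chunks
            .toStream(); // bridge the reactive stream to java.util.stream.Stream
}
```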
If needed, the solution can be checked in the `solution/exercise-3` folder.
- Make sure the Ollama container is running
- Run the application
- In the application prompt, type the `llmctx` command and ask a question about the email content. Here are some examples:

  ```
  llmctx What is the local currency?
  llmctx What is the airport?
  ```

- The response can take some time to generate, so please be patient
- We can also ask the model to enrich the context information:

  ```
  llmctx Give me the climate of the destination
  llmctx What is the area of the reserve?
  ```
We implemented information extraction from the document simply by appending its content to the query. This simple approach highlights a few concepts:
- Context is passed as user input to the model
- The LLM is able to complete the context with knowledge from its training data (but it can generate hallucinations)
- The bigger the query, the longer the response time
- Spring AI provides the `PromptTemplate` class to easily integrate parameters into preformatted prompt content, which is useful for implementing a prompt library (see the sketch after this list)
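To illustrate that last point, `PromptTemplate` can also render a parameterized prompt as a plain string (the template text below is a made-up example):

```java
import java.util.Map;
import org.springframework.ai.chat.prompt.PromptTemplate;

public class PromptLibraryDemo {
    public static void main(String[] args) {
        // Placeholders in braces are filled from the map passed to render()
        PromptTemplate template =
                new PromptTemplate("Translate {text} into {language}.");
        System.out.println(template.render(
                Map.of("text", "Hello", "language", "French")));
        // Prints: Translate Hello into French.
    }
}
```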
Retrieval Augmented Generation (RAG) is part of the answer to the token limitation and makes queries more efficient. In the last exercise, we will discover how to implement the RAG pattern with Spring AI.