Added Gemini integration
amithkoujalgi committed Jul 30, 2024
1 parent 28b890d commit 06467d1
Showing 8 changed files with 116 additions and 26 deletions.
12 changes: 9 additions & 3 deletions README.md
@@ -41,15 +41,16 @@ cloud-based.

| Log Source | Availability |
|------------|--------------|
| Log files | |
| ELK Stack | |
| Graylog | |

#### LLM Integrations

| LLM Integration | Availability |
|-----------------|--------------|
| Ollama | ✅️ |
| Gemini | ✅️ |
| OpenAI | |
| Amazon Bedrock | |

@@ -94,7 +95,11 @@ loguru run

```json
{
  "service": "gemini",
  "gemini": {
    "api_key": "your-api-key",
    "llm_name": "gemini-1.5-flash"
  },
  "ollama": {
    "hosts": [
      "http://localhost:11434/"
@@ -105,6 +110,7 @@ loguru run
      "temperature": 0.1
    }
  },
  "num_chunks_to_return": 100,
  "data_sources": [
    {
      "type": "filesystem",
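The `service` key in the snippet above selects which LLM section of the config is used at runtime. A minimal, self-contained sketch of that lookup (the config dict here is abridged from the README example):

```python
# Sketch: pick the active LLM section from a loguru-style config dict,
# keyed by the top-level "service" field shown in the README example.
config = {
    "service": "gemini",
    "gemini": {"api_key": "your-api-key", "llm_name": "gemini-1.5-flash"},
    "ollama": {"hosts": ["http://localhost:11434/"]},
    "num_chunks_to_return": 100,
}

# The value of "service" names the sibling section holding that service's settings.
active = config[config["service"]]
print(active["llm_name"])  # gemini-1.5-flash
```

Switching backends is then a one-key change (`"service": "ollama"`) with both sections left in place.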
59 changes: 58 additions & 1 deletion README.rst
@@ -1,4 +1,61 @@
Loguru CLI
==========

.. image:: https://raw.githubusercontent.com/Loguru-AI/Loguru-CLI/main/images/loguru-small.png
:align: center

.. epigraph:: An interactive commandline interface that brings intelligence to your logs.



*********************
What is it?
*********************

**Loguru-CLI** (read as "Log Guru" 📋🧘) is a Python package that brings intelligence to your logs. It is designed to be a universal tool for log aggregation and analysis, with seamless integrations with any LLM (Large Language Model), whether self-hosted or cloud-based.

For more details, check out our GitHub repository: https://github.com/Loguru-AI/Loguru-CLI

*********************
Features
*********************

* Leverage LLMs to gain insights from your logs.
* Easily integrate with any LLM (self-hosted or cloud-service offerings).
* Easily hook up any log source. Run refined/advanced queries supported by the
  logging platform/tool (for example, via the LLM's function-calling/tooling
  capabilities) and gain insights from the results.
* Save and replay history.
* Scan and rebuild index from your logs.

.. tip:: Currently supports filesystem-based logs only, with plans to extend support to more log sources soon.

*********************
Getting Started
*********************

Install Loguru::

pip install loguru-cli

Show config::

loguru show-config

Scan and rebuild index from log files::

loguru scan

Run app::

loguru run

Example Interaction:

.. code-block:: text

    >>> List all the errors
    1. The error message indicates that there is a problem connecting to the PostgreSQL database at localhost on port 5432. Specifically, it says "Connection refused". This means that either the hostname or port number is incorrect, or the postmaster (the process that manages the PostgreSQL server) is not accepting TCP/IP connections.
    2. The stack trace shows that the problem is occurring in the HikariCP connection pool, which is being used to manage connections to the database. Specifically, it says "Exception during pool initialization". This suggests that there may be a problem with the configuration of the connection pool or the database connection settings.
    3. It is also possible that there is a firewall or network issue preventing the connection from being established. For example, if there is a firewall on the server running PostgreSQL, it may be blocking incoming connections on port 5432.
Binary file added images/loguru-small.png
Binary file added images/loguru.png
49 changes: 34 additions & 15 deletions loguru/core/fs_log_rag.py
@@ -10,6 +10,7 @@
from langchain_community.vectorstores import FAISS
from langchain_core.documents import Document
from langchain_core.prompts import PromptTemplate
from langchain_google_genai import ChatGoogleGenerativeAI
from langchain_huggingface import HuggingFaceEmbeddings

from loguru import LOGURU_DATA_DIR, HUGGING_FACE_EMBEDDINGS_DEVICE_TYPE
@@ -164,7 +165,8 @@ def ask(self, question: str, stream: bool = False) -> tuple[str, list[Document]]
You are an honest assistant.
You will accept contents of a log file and you will answer the question asked by the user appropriately.
If you don't know the answer, just say you don't know. Don't try to make up an answer.
If you find time, date or timestamps in the logs,
make sure to convert the timestamp to a more human-readable format in your response as DD/MM/YYYY HH:MM:SS
### Context:
{context}
@@ -176,21 +178,38 @@ def ask(self, question: str, stream: bool = False) -> tuple[str, list[Document]]
"""

prompt = PromptTemplate.from_template(template)

        service = self._config.service

        llm = None
        if service == 'ollama':
            llm = ChatOllama(
                temperature=0,
                base_url=self._ollama_api_base_url,
                model=self._model_name,
                streaming=True,
                # seed=2,
                # A higher top_k (100) gives more diverse answers; a lower value (10) is more conservative.
                top_k=10,
                # A higher top_p (0.95) leads to more diverse text; a lower value (0.5) generates more focused text.
                top_p=0.3,
                # Sets the size of the context window used to generate the next token.
                num_ctx=3072,
                verbose=False
            )
        elif service == 'gemini':
            # https://python.langchain.com/v0.2/docs/integrations/chat/google_generative_ai/
            llm = ChatGoogleGenerativeAI(
                model=self._config.gemini.llm_name,
                temperature=0,
                max_tokens=None,
                timeout=None,
                max_retries=2,
                google_api_key=self._config.gemini.api_key
            )
        else:
            services = ['ollama', 'gemini']
            raise ValueError(f"Invalid service: {service}. Available services are {', '.join(services)}")

        if stream:
            llm.callbacks = [StreamingStdOutCallbackHandler()]

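The branching in `ask` reduces to a small select-by-service pattern: validate the configured service name, then build the matching client. A hedged, dependency-free sketch (the `make_llm` name and the placeholder return strings are illustrative, not part of the project):

```python
def make_llm(service: str) -> str:
    """Return a stand-in for the chat client matching `service`.

    In the real code each branch constructs a langchain chat model
    (ChatOllama or ChatGoogleGenerativeAI); here a string stands in
    so the dispatch/validation logic can be shown in isolation.
    """
    services = ['ollama', 'gemini']
    if service not in services:
        # Failing fast here avoids carrying a None client into later calls.
        raise ValueError(f"Invalid service: {service}. Available services are {', '.join(services)}")
    return f"<{service} client>"

print(make_llm("gemini"))  # <gemini client>
```

Raising on an unknown service (rather than printing and continuing) keeps the error at the configuration boundary instead of surfacing later as an `AttributeError` on a `None` client.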
7 changes: 7 additions & 0 deletions loguru/core/models/config.py
@@ -45,7 +45,14 @@ class DataSource(BaseModel):
    ds_params: Params = Field(..., description="Parameters for the data source")


class Gemini(BaseModel):
    api_key: str = Field(..., description="Gemini API Key")
    llm_name: str = Field(..., description="Gemini Model Name. Ex: gemini-1.5-flash, gemini-1.5-pro")


class Config(BaseModel):
    num_chunks_to_return: int = Field(..., description="Number of chunks to return")
    service: str = Field(..., description="LLM service type. Ex: ollama, gemini")
    ollama: Ollama = Field(..., description="Ollama configuration")
    gemini: Optional[Gemini] = Field(None, description="Gemini configuration")
    data_sources: List[DataSource] = Field(..., description="List of data sources")
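For illustration, here is a stdlib-only stand-in for the config models above (the real code uses pydantic's `BaseModel`/`Field`; this dataclass sketch only mirrors the shape, with the Ollama and data-source fields omitted):

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class GeminiConfig:
    """Mirrors the Gemini pydantic model: API key plus model name."""
    api_key: str
    llm_name: str


@dataclass
class AppConfig:
    """Abridged stand-in for Config: gemini is optional so an
    Ollama-only config file still parses without a "gemini" section."""
    num_chunks_to_return: int
    service: str
    gemini: Optional[GeminiConfig] = None


cfg = AppConfig(
    num_chunks_to_return=100,
    service="gemini",
    gemini=GeminiConfig(api_key="your-api-key", llm_name="gemini-1.5-flash"),
)
print(cfg.gemini.llm_name)  # gemini-1.5-flash
```

Making `gemini` default to `None` matters for backward compatibility: configs written before this commit have no `"gemini"` key and should still validate.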
3 changes: 2 additions & 1 deletion requirements.txt
@@ -4,4 +4,5 @@ langchain-experimental==0.0.62
langchain-huggingface==0.0.3
sentence-transformers==3.0.1
#faiss-gpu==1.7.2
faiss-cpu==1.8.0.post1
langchain-google-genai==1.0.8
12 changes: 6 additions & 6 deletions setup.py
@@ -1,25 +1,25 @@
import os
import re
import sys
from m2r import parse_from_file

from setuptools import setup, find_packages


def get_requirements_to_install():
    __curr_location__ = os.path.realpath(os.path.join(os.getcwd(), os.path.dirname(__file__)))
    requirements_txt_file_as_str = f"{__curr_location__}/requirements.txt"
    with open(requirements_txt_file_as_str, 'r') as req_file:
        libs = req_file.readlines()
    for i in range(len(libs)):
        libs[i] = libs[i].replace('\n', '')
    return libs


def get_description() -> str:
    __curr_location__ = os.path.realpath(os.path.join(os.getcwd(), os.path.dirname(__file__)))
    rst_txt_file_as_str = f'{__curr_location__}/README.rst'
    with open(rst_txt_file_as_str, 'r') as rst_file:
        desc = rst_file.read()
    return desc


