🚀 Add Ruff Linter #1651

Open

Smartappli wants to merge 179 commits into main from Linters
Commits (179)
5cc40d7
Create linter.yml
Smartappli Aug 2, 2024
fb825d5
Create fixer.yml
Smartappli Aug 2, 2024
2b513d0
Rename .github/fixer.yml to .github/workflows/fixer.yml
Smartappli Aug 2, 2024
41adecd
style fixes by ruff
Smartappli Aug 2, 2024
72ecb43
Lint
Smartappli Aug 2, 2024
57589a0
Lint
Smartappli Aug 2, 2024
f0a719c
Lint
Smartappli Aug 2, 2024
8b3cf53
Lint
Smartappli Aug 2, 2024
bab328c
Lint
Smartappli Aug 2, 2024
70a18f3
Lint
Smartappli Aug 2, 2024
c4dd629
Lint
Smartappli Aug 2, 2024
4aa4000
Lint
Smartappli Aug 2, 2024
9a7a0cd
Lint
Smartappli Aug 2, 2024
7ba0fa4
Lint
Smartappli Aug 2, 2024
fd20768
Lint
Smartappli Aug 2, 2024
3f75d69
Lint
Smartappli Aug 2, 2024
6c25880
Lint
Smartappli Aug 2, 2024
9bdb1d7
Lint
Smartappli Aug 2, 2024
3665428
Lint
Smartappli Aug 2, 2024
39cd742
Lint
Smartappli Aug 2, 2024
63d0418
style fixes by ruff
Smartappli Aug 2, 2024
8aae026
Lint
Smartappli Aug 2, 2024
567625f
Lint
Smartappli Aug 2, 2024
8c5817f
Lint
Smartappli Aug 2, 2024
28c539d
Lint
Smartappli Aug 2, 2024
4c14738
Lint
Smartappli Aug 2, 2024
d445fdb
Lint
Smartappli Aug 2, 2024
bd03dfb
Lint
Smartappli Aug 2, 2024
074b5ce
Lint
Smartappli Aug 2, 2024
4bbb41c
Lint
Smartappli Aug 2, 2024
e7b7fc7
Lint
Smartappli Aug 2, 2024
e7774f0
Lint
Smartappli Aug 2, 2024
d980fa9
Lint
Smartappli Aug 2, 2024
a55649c
Lint
Smartappli Aug 2, 2024
707d972
Lint
Smartappli Aug 2, 2024
aa68377
Lint
Smartappli Aug 2, 2024
d51124a
Lint
Smartappli Aug 2, 2024
1564d24
Lint
Smartappli Aug 2, 2024
3b23dba
Lint
Smartappli Aug 2, 2024
03a8d12
Lint
Smartappli Aug 2, 2024
27dbd64
Lint
Smartappli Aug 2, 2024
e211d7b
Lint
Smartappli Aug 2, 2024
2ae2596
Delete .github/workflows/fixer.yml
Smartappli Aug 2, 2024
2286565
Lint
Smartappli Aug 2, 2024
7481f96
Lint
Smartappli Aug 2, 2024
8ed98ae
Update __init__.py
Smartappli Aug 2, 2024
f2fbc90
Update __init__.py
Smartappli Aug 2, 2024
376fdb0
Update linter.yml
Smartappli Aug 2, 2024
4006799
Lint
Smartappli Aug 2, 2024
795465a
Lint
Smartappli Aug 2, 2024
d1f88ba
Lint
Smartappli Aug 2, 2024
0c0f1dc
Lint
Smartappli Aug 2, 2024
49e9aa7
Lint
Smartappli Aug 2, 2024
0e94977
Lint
Smartappli Aug 2, 2024
f81bd5e
Add YTT, COM, ANN, DTZ rules
Smartappli Aug 2, 2024
089e4f4
Lint
Smartappli Aug 2, 2024
00cd30d
Lint
Smartappli Aug 2, 2024
86cf326
Lint
Smartappli Aug 2, 2024
7b0dcbc
Lint
Smartappli Aug 2, 2024
1e782f7
Lint
Smartappli Aug 2, 2024
9d8ecd1
Lint
Smartappli Aug 2, 2024
ff353bc
Lint
Smartappli Aug 2, 2024
1a40080
Lint
Smartappli Aug 2, 2024
f7e3b2f
Lint
Smartappli Aug 2, 2024
9544866
Lint
Smartappli Aug 2, 2024
bcf8d9b
Lint
Smartappli Aug 2, 2024
92f5221
Lint
Smartappli Aug 2, 2024
be20a80
Lint
Smartappli Aug 2, 2024
ebfb375
Lint
Smartappli Aug 2, 2024
77e88f1
Lint
Smartappli Aug 2, 2024
907e65f
Lint
Smartappli Aug 2, 2024
79b563d
Update Chat.py
Smartappli Aug 2, 2024
be3c02d
Lint
Smartappli Aug 2, 2024
054b29b
Lint
Smartappli Aug 2, 2024
d2ca3d8
Lint
Smartappli Aug 2, 2024
b5fc1da
Lint
Smartappli Aug 2, 2024
c355e7c
Lint
Smartappli Aug 2, 2024
65262eb
Lint
Smartappli Aug 2, 2024
bb88cb3
Lint
Smartappli Aug 2, 2024
28e11f6
Lint
Smartappli Aug 2, 2024
749474b
Lint
Smartappli Aug 2, 2024
7e9bde6
Lint
Smartappli Aug 2, 2024
4056760
Lint
Smartappli Aug 2, 2024
c6c4f8c
Lint
Smartappli Aug 2, 2024
142d2c6
Lint
Smartappli Aug 2, 2024
4ebb74e
Lint
Smartappli Aug 2, 2024
0a2753a
Lint
Smartappli Aug 2, 2024
7921e76
Lint
Smartappli Aug 2, 2024
3564f2d
Lint
Smartappli Aug 2, 2024
a0a58d8
Lint
Smartappli Aug 2, 2024
7bded16
Lint
Smartappli Aug 2, 2024
cc68145
bugfix
Smartappli Aug 2, 2024
dd962f8
Lint
Smartappli Aug 2, 2024
019e580
add C90 rules
Smartappli Aug 2, 2024
dc70fb0
add ERA and FA rules
Smartappli Aug 2, 2024
63b25f1
add FAST, FIX and FLY rules
Smartappli Aug 2, 2024
f3f905b
add AIR, PT and PGH rules
Smartappli Aug 2, 2024
17d1a1f
add ARG rules
Smartappli Aug 2, 2024
7850a96
add BLE rules
Smartappli Aug 2, 2024
7d639bd
add ICN, ISC rules
Smartappli Aug 2, 2024
0d59366
Update linter.yml
Smartappli Aug 2, 2024
9c2df8b
Update linter.yml
Smartappli Aug 2, 2024
08692dd
Create fixer.yml
Smartappli Aug 2, 2024
069fb9a
Update fixer.yml
Smartappli Aug 2, 2024
53f7b08
style fixes by ruff
Smartappli Aug 2, 2024
02b87f3
Update linter.yml
Smartappli Aug 2, 2024
6679172
Delete .github/workflows/fixer.yml
Smartappli Aug 2, 2024
35533fd
add new rules
Smartappli Aug 3, 2024
ee5951d
add more rules
Smartappli Aug 3, 2024
f8b8f41
Lint
Smartappli Aug 3, 2024
cf095ee
Lint
Smartappli Aug 3, 2024
e86d973
Lint
Smartappli Aug 3, 2024
20fad27
Lint
Smartappli Aug 3, 2024
5428cce
Lint
Smartappli Aug 3, 2024
ea5ced2
add more rules
Smartappli Aug 3, 2024
a5eec36
activation of preview
Smartappli Aug 3, 2024
4b30a17
Lint
Smartappli Aug 3, 2024
f047d4c
Lint
Smartappli Aug 3, 2024
ac2b640
Lint
Smartappli Aug 3, 2024
06d285c
Lint
Smartappli Aug 3, 2024
1555e21
Update llama_tokenizer.py
Smartappli Aug 3, 2024
5fdb66c
Lint
Smartappli Aug 3, 2024
9ceadda
Lint
Smartappli Aug 3, 2024
ed2f893
Lint
Smartappli Aug 3, 2024
ac25612
Update linter.yml
Smartappli Aug 3, 2024
5bdad76
Lint
Smartappli Aug 3, 2024
ecd7890
Lint
Smartappli Aug 3, 2024
a66ccb8
Merge branch 'main' into Linters
Smartappli Aug 14, 2024
19d5afe
Linter
Smartappli Aug 15, 2024
52200d5
Lint
Smartappli Aug 15, 2024
84532c2
Create ci.yml
Smartappli Aug 15, 2024
a5e7f87
Create .pre-commit-config.yaml
Smartappli Aug 15, 2024
16df7d6
Create ruff.toml
Smartappli Aug 15, 2024
cb04843
Update pyproject.toml
Smartappli Aug 15, 2024
c1d61ad
Update ci.yml
Smartappli Aug 15, 2024
a3a43d5
Update ci.yml
Smartappli Aug 15, 2024
68dd940
Update ci.yml
Smartappli Aug 15, 2024
f03b223
Update ci.yml
Smartappli Aug 15, 2024
a3b5956
Delete .github/workflows/linter.yml
Smartappli Aug 15, 2024
b32fd7d
Update .pre-commit-config.yaml
Smartappli Aug 15, 2024
2dc7394
exclude rule T201
Smartappli Aug 15, 2024
49b5434
exclude rule ERA001
Smartappli Aug 15, 2024
772c5b3
Update ruff.toml
Smartappli Aug 15, 2024
d04fab6
Update ruff.toml
Smartappli Aug 15, 2024
d028543
Create fixer.yml
Smartappli Aug 15, 2024
e1e861e
style fixes by ruff
Smartappli Aug 15, 2024
d8d9c4d
Lint E711
Smartappli Aug 15, 2024
f747d46
Update llama_types.py
Smartappli Aug 15, 2024
91ce8ac
style fixes by ruff
Smartappli Aug 15, 2024
e33d41b
Delete .github/workflows/fixer.yml
Smartappli Aug 15, 2024
03f8007
Update settings.py
Smartappli Aug 15, 2024
85b1930
Update llama_types.py
Smartappli Aug 15, 2024
a9a7558
Update llama_cpp.py
Smartappli Aug 15, 2024
b58c8b5
Update ruff.toml
Smartappli Aug 15, 2024
683868b
Update llama_types.py
Smartappli Aug 15, 2024
a5f16dc
Update llama_types.py
Smartappli Aug 15, 2024
a35e3f2
Update llama_cache.py
Smartappli Aug 15, 2024
ca3fc20
Update llama_cpp.py
Smartappli Aug 15, 2024
9a13636
Update llama.py
Smartappli Aug 15, 2024
e15563f
Update app.py
Smartappli Aug 15, 2024
82bead9
Update types.py
Smartappli Aug 15, 2024
2d9ee84
Update errors.py
Smartappli Aug 15, 2024
e133736
Update cli.py
Smartappli Aug 15, 2024
5af3b53
Update model.py
Smartappli Aug 15, 2024
4494458
Update settings.py
Smartappli Aug 15, 2024
55ba31a
Update test.yaml
Smartappli Aug 15, 2024
42e15fe
Update test.yaml
Smartappli Aug 15, 2024
00d9c9c
Update test.yaml
Smartappli Aug 15, 2024
09d4a2e
Update test.yaml
Smartappli Aug 15, 2024
8614ee7
Update test.yaml
Smartappli Aug 15, 2024
022d2db
Update test.yaml
Smartappli Aug 15, 2024
8e111ce
Update ruff.toml
Smartappli Aug 15, 2024
34c2087
Update ruff.toml
Smartappli Aug 15, 2024
28e4dbd
Update .pre-commit-config.yaml
Smartappli Aug 29, 2024
04d2868
Update .pre-commit-config.yaml
Smartappli Aug 31, 2024
1ac586f
Update .pre-commit-config.yaml
Smartappli Aug 31, 2024
6fd538c
Update .pre-commit-config.yaml
Smartappli Sep 7, 2024
d2b26f8
Merge branch 'main' into Linters
abetlen Sep 22, 2024
2dd0fce
Update .pre-commit-config.yaml
Smartappli Sep 22, 2024
Files changed
21 changes: 21 additions & 0 deletions .github/workflows/ci.yml
@@ -0,0 +1,21 @@
+name: ci
+
+on:
+  pull_request:
+  push:
+    branches: [main]
+
+jobs:
+  pre-commit:
+    runs-on: ubuntu-latest
+    permissions:
+      contents: write
+    steps:
+      - uses: actions/checkout@v4
+        # with:
+        #   ref: ${{ github.head_ref }}
+      - uses: actions/setup-python@v5
+      - uses: pre-commit/action@v3.0.1
+      # - uses: stefanzweifel/git-auto-commit-action@v5
+      #   with:
+      #     commit_message: 'pre commit fixes'
25 changes: 25 additions & 0 deletions .pre-commit-config.yaml
@@ -0,0 +1,25 @@
+repos:
+  # auto update
+  - repo: https://gitlab.com/vojko.pribudic.foss/pre-commit-update
+    rev: "v0.5.0"
+    hooks:
+      - id: pre-commit-update
+        args: [--dry-run, --all-versions]
+
+  # ruff
+  - repo: https://github.com/astral-sh/ruff-pre-commit
+    rev: "v0.6.7"
+    hooks:
+      # Run the linter.
+      - id: ruff
+        types_or: [ python, pyi, jupyter ]
+        args: [ --fix ]
+      # Run the formatter.
+      - id: ruff-format
+        types_or: [ python, pyi, jupyter ]
+
+  - repo: https://github.com/pre-commit/mirrors-mypy
+    rev: "v1.11.2"
+    hooks:
+      - id: mypy
+        args: [ '--ignore-missing-imports', '--disable-error-code=top-level-await', "--disable-error-code=empty-body" ]
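To reproduce these hooks locally rather than waiting on CI, something like the sketch below should work (assumptions: `ruff` and `pre-commit` installed from PyPI and run from the repository root; the `run_checks` helper is illustrative, not part of this PR):

import subprocess


def run_checks() -> None:
    # Apply Ruff's safe autofixes, then its formatter (the same two hooks as above).
    subprocess.run(["ruff", "check", "--fix", "."], check=False)
    subprocess.run(["ruff", "format", "."], check=False)
    # Run every hook declared in .pre-commit-config.yaml against the whole tree.
    subprocess.run(["pre-commit", "run", "--all-files"], check=False)


if __name__ == "__main__":
    run_checks()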
60 changes: 30 additions & 30 deletions docker/open_llama/hug_model.py
@@ -1,26 +1,27 @@
-import requests
+import argparse
 import json
 import os
 import struct
-import argparse
+
+import requests
 
+
 def make_request(url, params=None):
     print(f"Making request to {url}...")
     response = requests.get(url, params=params)
     if response.status_code == 200:
         return json.loads(response.text)
-    else:
-        print(f"Request failed with status code {response.status_code}")
-        return None
+    print(f"Request failed with status code {response.status_code}")
+    return None
 
 def check_magic_and_version(filename):
-    with open(filename, 'rb') as f:
+    with open(filename, "rb") as f:
         # Read the first 6 bytes from the file
         data = f.read(6)
 
         # Unpack the binary data, interpreting the first 4 bytes as a little-endian unsigned int
         # and the next 2 bytes as a little-endian unsigned short
-        magic, version = struct.unpack('<I H', data)
+        magic, version = struct.unpack("<I H", data)
 
         print(f"magic: 0x{magic:08x}, version: 0x{version:04x}, file: {filename}")
 
@@ -30,17 +31,17 @@ def download_file(url, destination):
     print(f"Downloading {url} to {destination}...")
     response = requests.get(url, stream=True)
     if response.status_code == 200:
-        with open(destination, 'wb') as f:
+        with open(destination, "wb") as f:
             total_downloaded = 0
             for chunk in response.iter_content(chunk_size=1024):
                 if chunk: # filter out keep-alive new chunks
                     f.write(chunk)
                     total_downloaded += len(chunk)
                     if total_downloaded >= 10485760: # 10 MB
-                        print('.', end='', flush=True)
+                        print(".", end="", flush=True)
                         total_downloaded = 0
         print("\nDownload complete.")
 
         # Creating a symbolic link from destination to "model.bin"
         if os.path.isfile("model.bin"):
             os.remove("model.bin") # remove the existing link if any
@@ -61,30 +62,29 @@ def get_user_choice(model_list):
         if 0 <= index < len(model_list):
             # Return the chosen model
             return model_list[index]
-        else:
-            print("Invalid choice.")
+        print("Invalid choice.")
     except ValueError:
         print("Invalid input. Please enter a number corresponding to a model.")
     except IndexError:
         print("Invalid choice. Index out of range.")
 
     return None
 
 def main():
     # Create an argument parser
-    parser = argparse.ArgumentParser(description='Process some parameters.')
+    parser = argparse.ArgumentParser(description="Process some parameters.")
 
     # Arguments
-    parser.add_argument('-v', '--version', type=int, default=0x0003,
-                        help='hexadecimal version number of ggml file')
-    parser.add_argument('-a', '--author', type=str, default='TheBloke',
-                        help='HuggingFace author filter')
-    parser.add_argument('-t', '--tag', type=str, default='llama',
-                        help='HuggingFace tag filter')
-    parser.add_argument('-s', '--search', type=str, default='',
-                        help='HuggingFace search filter')
-    parser.add_argument('-f', '--filename', type=str, default='q5_1',
-                        help='HuggingFace model repository filename substring match')
+    parser.add_argument("-v", "--version", type=int, default=0x0003,
+                        help="hexadecimal version number of ggml file")
+    parser.add_argument("-a", "--author", type=str, default="TheBloke",
+                        help="HuggingFace author filter")
+    parser.add_argument("-t", "--tag", type=str, default="llama",
+                        help="HuggingFace tag filter")
+    parser.add_argument("-s", "--search", type=str, default="",
+                        help="HuggingFace search filter")
+    parser.add_argument("-f", "--filename", type=str, default="q5_1",
+                        help="HuggingFace model repository filename substring match")
 
     # Parse the arguments
     args = parser.parse_args()
@@ -96,20 +96,20 @@ def main():
         "search": args.search
     }
 
-    models = make_request('https://huggingface.co/api/models', params=params)
+    models = make_request("https://huggingface.co/api/models", params=params)
    if models is None:
         return
 
     model_list = []
     # Iterate over the models
     for model in models:
-        model_id = model['id']
-        model_info = make_request(f'https://huggingface.co/api/models/{model_id}')
+        model_id = model["id"]
+        model_info = make_request(f"https://huggingface.co/api/models/{model_id}")
         if model_info is None:
             continue
 
-        for sibling in model_info.get('siblings', []):
-            rfilename = sibling.get('rfilename')
+        for sibling in model_info.get("siblings", []):
+            rfilename = sibling.get("rfilename")
             if rfilename and args.filename in rfilename:
                 model_list.append((model_id, rfilename))
 
@@ -135,5 +135,5 @@ def main():
         print("Error - model choice was None")
         exit(2)
 
-if __name__ == '__main__':
+if __name__ == "__main__":
     main()
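Most of the churn in this file is mechanical Ruff output: single quotes normalized to double quotes (the ruff-format default) and `else` blocks dropped after `return` (rule RET505 from flake8-return). A minimal standalone sketch of the second pattern, using a hypothetical `fetch_json` helper rather than anything from this repository:

import requests


def fetch_json(url: str):
    response = requests.get(url, timeout=30)
    if response.status_code == 200:
        return response.json()
    # Once the happy path has returned, no `else:` is needed (RET505),
    # so the failure path stays one indent level shallower.
    print(f"Request failed with status code {response.status_code}")
    return None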
2 changes: 0 additions & 2 deletions examples/batch-processing/server.py
@@ -23,8 +23,6 @@
 
 app = FastAPI()
 
-import openai.types.chat as types
-
 
 @app.post("/v1/chat/completions")
 def create_chat_completions():
8 changes: 4 additions & 4 deletions examples/gradio_chat/local.py
@@ -1,13 +1,13 @@
+import gradio as gr
+
 import llama_cpp
 import llama_cpp.llama_tokenizer
 
-import gradio as gr
-
 llama = llama_cpp.Llama.from_pretrained(
     repo_id="Qwen/Qwen1.5-0.5B-Chat-GGUF",
     filename="*q8_0.gguf",
     tokenizer=llama_cpp.llama_tokenizer.LlamaHFTokenizer.from_pretrained(
-        "Qwen/Qwen1.5-0.5B"
+        "Qwen/Qwen1.5-0.5B",
     ),
     verbose=False,
 )
@@ -25,7 +25,7 @@ def predict(message, history):
     messages.append({"role": "user", "content": message})
 
     response = llama.create_chat_completion_openai_v1(
-        model=model, messages=messages, stream=True
+        model=model, messages=messages, stream=True,
     )
 
     text = ""
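The `stream=True` to `stream=True,` edits in this file and several below come from the flake8-commas (COM) rules the PR enables: a trailing comma after the last element of a multi-line call. A sketch of why that helps (the `create` function here is hypothetical, not this repository's API):

def create(model: str, messages: list, stream: bool = False) -> None:
    ...


# With the trailing comma in place, appending an argument later
# changes one line in the diff instead of two.
create(
    "example-model",
    [{"role": "user", "content": "hi"}],
    stream=True,
)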
3 changes: 1 addition & 2 deletions examples/gradio_chat/server.py
@@ -1,5 +1,4 @@
 import gradio as gr
-
 from openai import OpenAI
 
 client = OpenAI(base_url="http://localhost:8000/v1", api_key="llama.cpp")
@@ -17,7 +16,7 @@ def predict(message, history):
     messages.append({"role": "user", "content": message})
 
     response = client.chat.completions.create(
-        model=model, messages=messages, stream=True
+        model=model, messages=messages, stream=True,
     )
 
     text = ""
3 changes: 1 addition & 2 deletions examples/hf_pull/main.py
@@ -1,12 +1,11 @@
 import llama_cpp
 import llama_cpp.llama_tokenizer
 
-
 llama = llama_cpp.Llama.from_pretrained(
     repo_id="Qwen/Qwen1.5-0.5B-Chat-GGUF",
     filename="*q8_0.gguf",
     tokenizer=llama_cpp.llama_tokenizer.LlamaHFTokenizer.from_pretrained(
-        "Qwen/Qwen1.5-0.5B"
+        "Qwen/Qwen1.5-0.5B",
     ),
     verbose=False,
 )
3 changes: 2 additions & 1 deletion examples/high_level_api/fastapi_server.py
@@ -26,6 +26,7 @@
 """
 
 import os
+
 import uvicorn
 
 from llama_cpp.server.app import create_app
@@ -34,5 +35,5 @@
 app = create_app()
 
 uvicorn.run(
-    app, host=os.getenv("HOST", "localhost"), port=int(os.getenv("PORT", 8000))
+    app, host=os.getenv("HOST", "localhost"), port=int(os.getenv("PORT", 8000)),
 )
2 changes: 1 addition & 1 deletion examples/high_level_api/high_level_api_inference.py
@@ -1,5 +1,5 @@
-import json
 import argparse
+import json
 
 from llama_cpp import Llama
 
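The import reshuffling in these example scripts is Ruff's isort-compatible sorting (rule I001): standard-library modules first, third-party packages after a blank line, each group alphabetized. A small sketch of the target layout, assuming `llama_cpp` is installed:

# Standard library, alphabetized.
import argparse
import json

# Third-party, separated by one blank line.
from llama_cpp import Llama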
2 changes: 1 addition & 1 deletion examples/high_level_api/high_level_api_infill.py
@@ -33,5 +33,5 @@
     filtered = True
 
 print(
-    f"Fill-in-Middle completion{' (filtered)' if filtered else ''}:\n\n{args.prompt}\033[32m{response}\033[{'33' if filtered else '0'}m{args.suffix}\033[0m"
+    f"Fill-in-Middle completion{' (filtered)' if filtered else ''}:\n\n{args.prompt}\033[32m{response}\033[{'33' if filtered else '0'}m{args.suffix}\033[0m",
 )
2 changes: 1 addition & 1 deletion examples/high_level_api/high_level_api_streaming.py
@@ -1,5 +1,5 @@
-import json
 import argparse
+import json
 
 from llama_cpp import Llama
 
13 changes: 7 additions & 6 deletions examples/high_level_api/langchain_custom_llm.py
@@ -1,9 +1,10 @@
 import argparse
 
-from llama_cpp import Llama
+from collections.abc import Mapping
+from typing import Any, List, Optional
 
 from langchain.llms.base import LLM
-from typing import Optional, List, Mapping, Any
+
+from llama_cpp import Llama
 
-
 class LlamaLLM(LLM):
@@ -19,7 +20,7 @@ def __init__(self, model_path: str, **kwargs: Any):
         llm = Llama(model_path=model_path)
         super().__init__(model_path=model_path, llm=llm, **kwargs)
 
-    def _call(self, prompt: str, stop: Optional[List[str]] = None) -> str:
+    def _call(self, prompt: str, stop: list[str] | None = None) -> str:
         response = self.llm(prompt, stop=stop or [])
         return response["choices"][0]["text"]
 
@@ -37,13 +38,13 @@ def _identifying_params(self) -> Mapping[str, Any]:
 
 # Basic Q&A
 answer = llm(
-    "Question: What is the capital of France? Answer: ", stop=["Question:", "\n"]
+    "Question: What is the capital of France? Answer: ", stop=["Question:", "\n"],
 )
 print(f"Answer: {answer.strip()}")
 
 # Using in a chain
-from langchain.prompts import PromptTemplate
 from langchain.chains import LLMChain
+from langchain.prompts import PromptTemplate
 
 prompt = PromptTemplate(
     input_variables=["product"],
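Rewriting `Optional[List[str]]` as `list[str] | None` relies on PEP 604 union syntax and PEP 585 built-in generics, which Ruff's pyupgrade (UP) rules apply. At runtime those annotations need Python 3.10+, unless the module starts with `from __future__ import annotations`, which is presumably why the PR also enables the FA (flake8-future-annotations) rules. A minimal sketch with a hypothetical `complete` function:

from __future__ import annotations  # lets the annotation below parse on Python 3.8/3.9


def complete(prompt: str, stop: list[str] | None = None) -> str:
    # Equivalent to Optional[List[str]], with nothing imported from typing.
    stop = stop or []
    return f"{prompt!r} (would stop at {stop!r})"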
7 changes: 5 additions & 2 deletions examples/low_level_api/Chat.py
@@ -1,5 +1,8 @@
 #!/bin/python
-import sys, os, datetime
+import datetime
+import os
+import sys
+
 from common import GptParams
 from low_level_api_chat_cpp import LLaMAInteract
 
@@ -48,7 +51,7 @@ def env_or_def(env, default):
 {USER_NAME}: What time is it?
 {AI_NAME}: It is {DATE_TIME}.
 {USER_NAME}:""" + " ".join(
-    sys.argv[1:]
+    sys.argv[1:],
 )
 
 print("Loading model...")
6 changes: 4 additions & 2 deletions examples/low_level_api/Miku.py
@@ -1,5 +1,7 @@
 #!/bin/python
-import sys, os
+import os
+import sys
+
 from common import GptParams
 from low_level_api_chat_cpp import LLaMAInteract
 
@@ -35,7 +37,7 @@ def env_or_def(env, default):
 {AI_NAME}: /think I wonder what {USER_NAME} likes to do in his free time? I should ask him about that!
 {AI_NAME}: What do you like to do in your free time? ^_^
 {USER_NAME}:""" + " ".join(
-    sys.argv[1:]
+    sys.argv[1:],
 )
 
 print("Loading model...")
8 changes: 5 additions & 3 deletions examples/low_level_api/ReasonAct.py
@@ -1,5 +1,7 @@
 #!/bin/python
-import sys, os, datetime
+import os
+import sys
+
 from common import GptParams
 from low_level_api_chat_cpp import LLaMAInteract
 
@@ -12,7 +14,7 @@ def env_or_def(env, default):
 
 MODEL = env_or_def("MODEL", "./models/llama-13B/ggml-model.bin")
 
-prompt = f"""You run in a loop of Thought, Action, Observation.
+prompt = """You run in a loop of Thought, Action, Observation.
 At the end of the loop either Answer or restate your Thought and Action.
 Use Thought to describe your thoughts about the question you have been asked.
 Use Action to run one of these actions available to you:
@@ -30,7 +32,7 @@ def env_or_def(env, default):
 Thought: Do I need to use an action? No, I know the answer
 Answer: Paris is the capital of France
 Question:""" + " ".join(
-    sys.argv[1:]
+    sys.argv[1:],
 )
 
 print("Loading model...")