Commit

Update llm_loader.py
rounak610 authored Jan 18, 2024
1 parent f80831b commit 8027b54
Showing 1 changed file with 2 additions and 2 deletions.
4 changes: 2 additions & 2 deletions superagi/helper/llm_loader.py
@@ -22,7 +22,7 @@ def model(self):
         if self._model is None:
             try:
                 self._model = Llama(
-                    model_path="/app/local_model_path", n_ctx=self.context_length, n_gpu_layers=get_config('GPU_LAYERS', '-1'))
+                    model_path="/app/local_model_path", n_ctx=self.context_length, n_gpu_layers=int(get_config('GPU_LAYERS', '-1')))
             except Exception as e:
                 logger.error(e)
         return self._model
@@ -35,4 +35,4 @@ def grammar(self):
                     "superagi/llms/grammar/json.gbnf")
             except Exception as e:
                 logger.error(e)
-        return self._grammar
+        return self._grammar
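
For context, here is a minimal sketch of the loader around the changed line. The class name LLMLoader, its constructor, and the import paths for get_config and logger are assumptions inferred from the file name and the hunk headers; only Llama, get_config, logger, and the changed call are taken from the diff itself. The point of the fix is that get_config returns the configured value (or its string default '-1') as a string, while llama-cpp-python expects an integer for n_gpu_layers, so the raw value must be cast with int() before it reaches the Llama constructor.

from llama_cpp import Llama  # llama-cpp-python bindings

from superagi.config.config import get_config  # assumed import path
from superagi.lib.logger import logger          # assumed import path


class LLMLoader:
    """Sketch of the loader class; structure assumed from the hunk headers."""

    def __init__(self, context_length):
        self.context_length = context_length
        self._model = None

    @property
    def model(self):
        if self._model is None:
            try:
                self._model = Llama(
                    model_path="/app/local_model_path",
                    n_ctx=self.context_length,
                    # get_config returns the value as a string ('-1' by default),
                    # so it is cast to int before being passed to llama.cpp,
                    # which expects an integer number of GPU layers.
                    n_gpu_layers=int(get_config('GPU_LAYERS', '-1')))
            except Exception as e:
                logger.error(e)
        return self._model

With the cast in place, GPU_LAYERS can be set to -1 (offload all layers) or to a non-negative layer count in the config, and the value arrives at llama.cpp as an int either way.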

Codecov / codecov/patch check warning (superagi/helper/llm_loader.py#L38): added line #L38 was not covered by tests.
