I have an idea: text entered in a command mode would trigger special actions. Command mode would intercept user input before it reaches the model, and would keep track of sessions for recall and saving.
Example commands, triggered by the prefix "Command:":
Command:list models (result is an ls of the current model directory).
Command:load model groovy
If exactly one model matches "groovy", load it (e.g. "loading ggml-gpt4all-j-v1.3-groovy.bin");
if more than one matches, show a numbered list of the models with the option to select one or cancel.
Command:Write to file myfile.txt would write the last output to a file named myfile.txt.
Command:Read myfile.txt would read myfile.txt in as the next prompt.
Command:Read lora filename would load a LoRA adapter from the given file.
Command:Write lora filename would save a LoRA adapter to the given file.
Command:Write myfile.py would scan the previous output for Python code and write it out as a Python file.
Command:Write myfile.cpp would scan the previous output for C++ code and write it out as a C++ file.
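For the Write myfile.py / myfile.cpp commands, "scanning the previous output for code" could mean pulling the body out of the first fenced code block tagged with the right language. A minimal sketch (the helper name and behavior are my assumptions, not an existing llama.cpp API):

```cpp
#include <string>

// Hypothetical helper: find the first fenced block tagged `lang`
// (e.g. "python" or "cpp") in the model's last output and return its body.
// Returns an empty string if no such block is found.
std::string extract_code_block(const std::string & output, const std::string & lang) {
    const std::string open = "```" + lang;
    size_t start = output.find(open);
    if (start == std::string::npos) return "";
    start = output.find('\n', start);        // end of the opening fence line
    if (start == std::string::npos) return "";
    start += 1;                              // body starts on the next line
    size_t end = output.find("```", start);  // closing fence
    if (end == std::string::npos) return "";
    return output.substr(start, end - start);
}
```

Command:Write myfile.py would then just call this with "python" and write the result to disk.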
The idea could be extended further by piping commands in from other processes.
All this would allow llama.cpp to be controlled externally, and would give it a way to save and load output.
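The interception step itself could be a small check in the input loop: if the line starts with the "Command:" prefix, handle it locally and never send it to the model. A rough sketch, assuming hypothetical handlers (nothing here is an existing llama.cpp API):

```cpp
#include <iostream>
#include <string>

// Returns true if `input` was a "Command:" line and was handled locally,
// so the caller should NOT forward it to the model.
bool intercept_command(const std::string & input) {
    const std::string prefix = "Command:";
    if (input.rfind(prefix, 0) != 0) {
        return false;  // ordinary text: pass through to the model
    }
    std::string cmd = input.substr(prefix.size());
    if (cmd.rfind("list models", 0) == 0) {
        std::cout << "(would ls the current model directory)\n";
    } else if (cmd.rfind("load model ", 0) == 0) {
        std::string name = cmd.substr(std::string("load model ").size());
        std::cout << "(would search the model directory for '" << name << "')\n";
    } else {
        std::cout << "unknown command: " << cmd << "\n";
    }
    return true;  // swallowed: the model never sees command input
}
```

Because the check happens before tokenization, external processes piping text into stdin get the same command set for free.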