GPTifier is a command-line tool for interacting with OpenAI's API. I designed the program to "look and feel" similar to git, and I wrote it in C++ for performance and responsiveness. This program is tested on Ubuntu/Debian and macOS.
Ensure you have a valid OpenAI API key. Set it as an environment variable:
export OPENAI_API_KEY="<your-api-key>"
To run administrative commands, additionally set an admin key as an environment variable:
export OPENAI_ADMIN_KEY="<your-admin-key>"
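Since both keys are read from the environment, it can be worth confirming they are exported before running any commands. A minimal sketch (the key values below are placeholders, not real keys):

```shell
# Placeholder values for illustration only; substitute your real keys
export OPENAI_API_KEY="sk-your-api-key"
export OPENAI_ADMIN_KEY="sk-your-admin-key"

# Confirm both variables are visible to child processes before running gpt
for var in OPENAI_API_KEY OPENAI_ADMIN_KEY; do
    if [ -n "$(printenv "$var")" ]; then
        echo "$var is set"
    else
        echo "$var is NOT set" >&2
    fi
done
```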
This program requires CMake, {fmt}, and libcurl. These can be installed as follows:
# Ubuntu/Debian
apt install cmake libfmt-dev libcurl4-openssl-dev
# macOS
brew install cmake fmt
# libcurl usually comes bundled with macOS
This program should work on other Unix-like systems (e.g. other Linux distributions); however, I do not test these extensively.
Compile the binary by executing the make target:
make
The binary will be installed into the directory specified by the project's CMake install rules. To clean up generated artifacts:
make clean
This project uses the toml++ and nlohmann/json header-only libraries. By default, these header files are downloaded from their respective repositories. To specify paths to external header files instead, compile the project with the following definitions, for example:
# e.g. use a custom toml.hpp under /tmp
cmake -DUSE_SYSTEM_TOMLPLUSPLUS=ON -DTOMLPLUSPLUS_HPP=/tmp/toml.hpp -S GPTifier -B /tmp/build && make -j12 -C /tmp/build install
And:
# e.g. use a custom json.hpp under /tmp
cmake -DUSE_SYSTEM_NLOHMANN_JSON=ON -DNLOHMANN_JSON_HPP=/tmp/json.hpp -S GPTifier -B /tmp/build && make -j12 -C /tmp/build install
This project requires a specific "project directory" (~/.gptifier). Set it up by running:
./setup
The setup script generates a configuration file at ~/.gptifier/gptifier.toml. Open this file and adjust the configuration as needed.
Next, start the program:
gpt run
If the configuration file is set up correctly, the program will start an interactive session. You may instead get some variation of:
-bash: gpt: command not found
If so, try running gpt in a new terminal window.
The run command allows you to query OpenAI models like GPT-4. To start an interactive session, enter:
gpt run
You'll see a prompt where you can type your query:
------------------------------------------------------------------------------------------
Input: What is 3 + 5?
The program processes the request and returns the answer:
...
Results: 3 + 5 equals 8.
------------------------------------------------------------------------------------------
Export:
> Write reply to file? [y/n]:
You will be asked if you want to save the response to a file. If you choose y, the output will be saved:
...
> Writing reply to file /home/<your-username>/.gptifier/completions.gpt
------------------------------------------------------------------------------------------
Any new responses will append to this file.
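Because responses accumulate in this file, the most recent reply is always at the bottom. The sketch below appends a stand-in line and reads the end of the file (the appended content here is fabricated for illustration; gpt normally writes it for you):

```shell
COMPLETIONS="$HOME/.gptifier/completions.gpt"
mkdir -p "$(dirname "$COMPLETIONS")"

# Stand-in for a saved reply; gpt appends entries like this on export
printf 'Results: 3 + 5 equals 8.\n' >> "$COMPLETIONS"

# The newest completion is always at the end of the file
tail -n 3 "$COMPLETIONS"
```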
To specify a model for chat completion, use the -m or --model option. For example, to use GPT-4:
gpt run --model gpt-4 --prompt "What is 3 + 5?"
Tip
To see all available models, use the models command.
For multiline prompts, create a file named Inputfile in your working directory. GPTifier will automatically read from it. Alternatively, use the -r or --read-from-file option to specify a custom file.
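For example, a multiline prompt can be staged like this (the prompt text is illustrative):

```shell
# Write a multiline prompt; gpt run will pick up Inputfile automatically
cat > Inputfile <<'EOF'
Summarize the following requirements in one sentence:
- must run on Ubuntu/Debian and macOS
- must talk to the OpenAI API
EOF

# Confirm the prompt spans multiple lines
wc -l < Inputfile
```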
The short command is almost identical to the run command, but it returns a chat completion under the following conditions:
- Threading is disabled; that is, no timer will run in the background to time the round trip
- Verbosity is disabled; either the raw chat completion or an error will be printed to the console
- Text colorization is disabled; this is to prevent ANSI escape code artifact clutter
An example follows:
gpt short "What is 2 + 2?"
Which will print out:
2 + 2 equals 4.
Tip
Use this command if running GPTifier via something like vim's system() function.
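Because short prints only the raw completion (no timer, no colors), its output composes cleanly with command substitution. A sketch using a stand-in function, since the real call requires a valid API key:

```shell
# Stand-in for `gpt short`; the real command needs an OPENAI_API_KEY
fake_gpt_short() { echo "2 + 2 equals 4."; }

# Capture the plain-text reply directly; no ANSI codes to strip
answer=$(fake_gpt_short "What is 2 + 2?")
echo "captured: $answer"
```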
The embed command converts input text into a vector representation. To embed text, execute the following:
gpt embed
You will then be prompted with:
------------------------------------------------------------------------------------------
Input text to embed:
Enter the text you wish to embed:
------------------------------------------------------------------------------------------
Input text to embed: Convert me to a vector!
Press Enter to proceed. The program will generate the embedding, and the results will be saved to a JSON file located at ~/.gptifier/embeddings.gpt.
For large blocks of text, you can read from a file:
gpt embed -r my_text.txt -o my_embedding.json # and export embedding to a custom file!
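The exported embedding is plain JSON, so it can be inspected with standard tools. A sketch using stand-in data (the real file's exact schema is not shown here, so the structure below is an assumption for illustration):

```shell
# Stand-in embedding file; a real one is produced by `gpt embed -o ...`
EMBED_FILE=/tmp/my_embedding.json
printf '{"embedding": [0.1, -0.2, 0.3]}\n' > "$EMBED_FILE"

# Pretty-print and sanity-check that the file is valid JSON
python3 -m json.tool "$EMBED_FILE"
```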
The models command returns a list of currently available models. Simply run:
gpt models
Which will return:
------------------------------------------------------------------------------------------
Model ID Owner Creation time
------------------------------------------------------------------------------------------
dall-e-3 system 2023-10-31 20:46:29
whisper-1 openai-internal 2023-02-27 21:13:04
davinci-002 system 2023-08-21 16:11:41
... ... ...
User models (e.g. fine-tuned models) can be selectively listed by passing the -u or --user flag.
The files command is used to manage files uploaded to OpenAI.
To list the uploaded files, use:
gpt files
# or
gpt files list
To delete one or more uploaded files, use:
gpt files delete <file-id>
You can obtain the file ID by running the list subcommand.
The fine-tune command is used for managing fine-tuning operations.
- Create a dataset: Begin by creating a dataset. Refer to Preparing your dataset for detailed instructions.
- Upload the dataset:
gpt fine-tune upload-file jessica_training.jsonl
Here, jessica_training.jsonl is the name of your dataset file. Upon successful upload, you should see a confirmation message similar to the following:
Success! Uploaded file: jessica_training.jsonl With ID: file-6Vf...8t7
- Create a fine-tuning job:
gpt fine-tune create-job --file-id=file-6Vf...8t7 --model=gpt-4o-mini-2024-07-18
- Check the status of the job:
gpt fine-tune list-jobs
- Delete unneeded files: If you no longer need the training file, you can delete it from the OpenAI servers:
gpt files delete file-6Vf...8t7
- Delete unneeded models: If the fine-tuned model is no longer required, you can delete it using:
gpt fine-tune delete-model <model-id>
To find the model ID, run:
gpt models -u
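The dataset in the first step uses OpenAI's chat-format JSONL for fine-tuning, where each line is one complete training conversation. A minimal single-example sketch (a real dataset needs many more examples; the persona content here is made up):

```shell
# One training example per line, in OpenAI's chat fine-tuning format
cat > jessica_training.jsonl <<'EOF'
{"messages": [{"role": "system", "content": "You are Jessica, a cheerful assistant."}, {"role": "user", "content": "Hi!"}, {"role": "assistant", "content": "Hello there! How can I help?"}]}
EOF

# Every line must parse as a standalone JSON object
python3 -c "import json; [json.loads(l) for l in open('jessica_training.jsonl')]; print('valid JSONL')"
```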
Note
This command has been deprecated.
Note
This command has been deprecated in favor of a standalone solution. See FuncGraft for more information.
The img command allows users to generate PNG images according to instructions provided in a text file. At present, this command only supports dall-e-3 for image generation. To generate an image, run:
gpt img /tmp/prompt.txt # prompt.txt contains a description of the image
Note
The commands in this section assume that a valid OPENAI_ADMIN_KEY is set as an environment variable.
The costs command can be used to determine overall monetary usage of OpenAI resources over a specified number of days for an organization. For example:
gpt costs --days=5
This returns the usage per day over the past 5 days, along with the total usage over that period.
In the run command section, we discussed how completions can be optionally exported to ~/.gptifier/completions.gpt. If you wish to integrate access to these completions into your vim workflow, you can do so by first adding the function below to your ~/.vimrc file:
function OpenGPTifierResults()
let l:results_file = expand('~') . '/.gptifier/completions.gpt'
if filereadable(l:results_file)
execute 'vs ' . fnameescape(l:results_file)
else
echoerr l:results_file . ' does not exist'
endif
endfunction
This function checks if the results file exists and, if so, opens it in a vertical split within vim. Define a custom command in your ~/.vimrc:
" Open GPTifier results file
command G :call OpenGPTifierResults()
With this command configured, you can use :G in vim to open the ~/.gptifier/completions.gpt file in a separate vertical split. This setup allows for easy access and selective copying of saved OpenAI completions into your code or text files.
First, verify that the gpt command in your $PATH is the GPTifier binary and not an alias for another application:
gpt -h
If confirmed, proceed to remove the binary with the following command:
rm $(which gpt)
Before deletion, check the ~/.gptifier directory for any files you wish to retain, such as completions or configurations. Once reviewed, remove the directory:
rm -r ~/.gptifier
This project is licensed under the MIT License - see the LICENSE file for details.