pal is an AI assistant for terminals. The philosophy here is to feel like a classic shell utility with a bit of magic.
For now, fish and zsh are well supported on Linux. I hear macOS works fine too, but I don't personally test on Mac. All bug reports and feature requests are welcome.
Perhaps unsurprisingly, xkcd has elucidated the core situation inspiring this software:
pal helps defuse the bomb that nukes our focus when leaving the shell to surf for answers on the web.
It's a single binary that you can just download into any location on your $PATH:
# Linux (x86_64)
wget https://github.com/scottyeager/Pal/releases/latest/download/pal-linux-amd64 -O /usr/local/bin/pal
chmod +x /usr/local/bin/pal

# macOS (Apple Silicon)
wget https://github.com/scottyeager/Pal/releases/latest/download/pal-darwin-arm64 -O /usr/local/bin/pal
chmod +x /usr/local/bin/pal

If you have an ARM64 based Linux machine or an Intel Mac, see the releases page to find your binary link.
To conveniently install autocompletions and the abbreviation feature:
# fish
pal --fish-config >> ~/.config/fish/config.fish

# zsh (autocomplete is an optional feature in zsh--see details below)
pal --zsh-config >> ~/.zshrc

Start a new shell or source your config file from an existing shell to activate the features.
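For example, to activate in the current shell:

# fish
source ~/.config/fish/config.fish

# zsh
source ~/.zshrc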
More information about these features and how to install them individually can be found at the relevant docs pages: autocompletion and abbreviations.
Note that pal will store some files on your computer in the following location:
- If XDG_CONFIG_HOME is set: $XDG_CONFIG_HOME/pal_helper
- Otherwise: ~/.config/pal_helper
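In shell terms, the lookup order described above is equivalent to this (just an illustration, not pal's actual code):

# Fall back to ~/.config when XDG_CONFIG_HOME is unset
config_dir="${XDG_CONFIG_HOME:-$HOME/.config}/pal_helper"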
See "Config path philosophy" if you have questions about this
You will need to provide an API key for an LLM provider. Several providers listed below have a free tier.
The free tiers sometimes require that you agree to collection and use of the data you submit. Those providers also have paid plans without the data collection requirement. See the links for details.
Supported providers:
- DeepSeek
- Anthropic
- OpenAI
- Hugging Face Inference API (free with no data collection, but slow)
- Mistral (free with data collection)
- Google (free with data collection)
- OpenWebUI (self hosted models via Ollama, see guide)
- Any OpenAI API compatible provider (via manual config)
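For the last option, "OpenAI API compatible" means the provider exposes the standard chat completions endpoint and request shape, roughly like this (the URL, key, and model name below are placeholders, not pal configuration):

curl https://api.example.com/v1/chat/completions \
  -H "Authorization: Bearer $API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "example-model", "messages": [{"role": "user", "content": "Hello"}]}'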
For interactive configuration, run:
pal /config
# Config saved successfully at ~/.config/pal_helper/config.yaml

Abbreviations are an optional but highly recommended feature of pal. When they are enabled, you can autofill the contents of the suggestions from the last pal invocation like this:
pal1 # Hit space and first suggestion will be filled
pal2 # Etc

Both fish and zsh are supported for abbreviations. If you followed the quickstart, abbreviations will be available in every new shell or after sourcing the shell config file. For more info, see abbreviations.
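Under the hood, this relies on the shells' abbreviation mechanisms. In fish, for instance, an abbreviation can expand through a function, so the generated config plausibly does something along these lines (a hypothetical sketch with invented names, not pal's actual implementation):

# Hypothetical sketch: expand the token pal1 via a function that prints
# the first stored suggestion (file path invented for illustration)
function __pal_abbr_1
    cat ~/.config/pal_helper/suggestion_1
end
abbr --add pal1 --function __pal_abbr_1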
Since pal is built with Cobra, it's able to generate autocompletions for a variety of shells automatically. Currently only the fish and zsh completions are exposed.
If you followed the quickstart, then you've already installed the autocompletions. These instructions are for installing the autocompletions separately from the abbreviations feature.
To activate autocompletions, add the following to your ~/.config/fish/config.fish:

pal --fish-completion | source

In your ~/.zshrc:
# Make sure these lines are already present somewhere
autoload -Uz compinit
compinit
# Add this line to load pal's completions
source <(pal --zsh-completion)

Pal provides a few commands for working with LLMs in your shell.
The default pal mode accepts a task description or question about how to accomplish something with the shell:
pal Set a static IP for eth0
pal asks the model to provide a short list of possible commands. If the model complies, the suggestions are shown.
If you have abbreviations enabled, you can expand the suggestions:
pal1 # Hit space to expand

Sometimes a refusal message might be shown if the model can't or won't provide a command suggestion. You can try again or switch to /ask mode to get more information.
/ask mode can be used to pass general queries through to the model, without an expectation that it will suggest shell commands in response.
pal /ask Why is the sky blue

Using /file optimizes the output for redirecting directly into a file. That means that markdown formatting is always disabled. Since LLMs will often wrap the output with triple backticks, even when asked not to, pal will attempt to remove these as well:

pal /file write a Dockerfile that installs nginx into an Alpine base image > Dockerfile
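If you ever need to do that fence cleanup by hand, a rough equivalent with sed looks like this (a sketch of the idea; pal's internal handling may differ):

# Drop the first line if it opens a code fence and the last line if it closes one
# (raw_output.txt is a placeholder for captured model output)
sed -e '1{/^```/d;}' -e '${/^```/d;}' raw_output.txt > Dockerfile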
By default, each command invocation creates a fresh thread of conversation with the LLM. If you want to continue chatting with previous context included, there are two ways to do it.
- Use the /chat command. This will invoke the same command used previously, but with the conversation history included. It's useful for refining an initial prompt or asking follow up questions.

pal /ask "what does tar stand for?"
# "tar" stands for "tape archive." ...
pal /chat tell me more about these tapes

- Add the -c/--chat flag to a command. While changing commands mid thread is possible, the current implementation retains the original system prompt and the LLM can have trouble changing course even if directed. Further experimentation is needed here. Ideally, one could hash out details via some /ask commands and then use /file to output an artifact at the end, as in the example below.
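For example, a thread that ends in a file artifact might look like this (prompts are illustrative):

pal /ask how should a systemd service for a Python script be structured
pal -c /ask what about logging
pal -c /file write the final unit file > myscript.service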
The /commit command is used to stage changes in Git repos and automatically generate commit messages:
pal /commit
It works like this:
- Any modified files are git added (new files must be added manually)
- Diffs for the current commit and ten previous commit messages for context are sent to the LLM
- The suggested commit message is opened for review and editing if needed
It's possible to abort the commit by deleting the message and saving before exiting the editor.
In shells like fish and zsh, the ? is reserved for globbing, and this will cause problems if you try to use it without quoting or escaping. Thankfully, it doesn't really matter if you just omit the question mark when asking a question to an LLM. The same goes for apostrophes, which shells also use for quoting.
pal /ask whats the reason LLMs dont need punctuation marks to understand me

If necessary, you can pass special characters to pal by quoting them:
pal /ask what does this do: 'ls *.log'

As of fish version 4, the question mark will no longer be used as a glob character by default. You can enable this behavior in earlier versions of fish like this:
set -U fish_features qmark-noglob
# Takes effect in new shells

Anything passed to pal on stdin will be included with the prompt. That means you can do this:
cat README.md | pal /ask please summarize

Of course, that could also be written like this:
pal /ask please summarize $(cat README.md)

By redirecting stderr, error messages can be sent to pal:
apt install ping
# There's no package named "ping" so this is an error
# Zsh and Bash
apt install ping |& pal
# Fish
apt install ping &| pal

In this case, the error message is enough for the model to suggest the correct install command. You can also provide additional instructions or context as usual:
docker ps | pal how can I print the first four characters of the container ids only

The /models command can be used to view and select from configured models:
pal /models
To select a model by entering its name (with autocompletion), use /model:
pal /model mistral/codestral-latest
With no argument, /model prints the currently selected model.
For providers added through interactive config, a default set of models is included. Depending on the provider, additional models may be available that can be added by editing the config file directly. You can also remove models you don't use so they won't show up in the model selection list.
In the context of LLMs, temperature refers to the amount of randomness introduced when generating responses. With a temperature of 0, responses are deterministic. With a temperature of 2, you are working with an artist.
By default, pal uses a hopefully sensible hard coded temperature for the task at hand. You can override the temperature for any command that interacts with the AI backend like this:
pal -t 2 /ask write a poem
pal --temperature 0 /commit
If you want to set the temperature for command suggestions, use the /cmd command explicitly:
pal -t 2 /cmd show me a crazy command
Note that the temperature flag only takes effect when a slash command is specified. Without one, pal treats the flag as part of the input for the AI.
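To illustrate:

pal -t 2 show me a crazy command       # "-t 2" becomes part of the prompt
pal -t 2 /cmd show me a crazy command  # temperature is actually applied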
I tried being more specific about this in the past, but things move fast in this space, so I'll give some general guidance instead.
The command completion function of pal works well with pretty much any reasonably intelligent model. Since very few tokens are generated, this mode tends to be inexpensive, even with pricier models.
One point to consider is that fast responses are very nice when you're just trying to get something done and need the right command form. Choosing models that are fast but not the most intelligent is probably a good trade-off.
There are other terminal based AI projects, so why build another one? The short answer is that none quite provided the experience I wanted for this particular use case.
One category of in-shell assistants drops you into a whole new shell environment. I like my shell and I don't want to mess with it too much.
Another category leans a bit more toward a TUI, showing a menu of results on the screen to choose from, for example. The experience I desire is closer to that of a classic shell utility: an ls that lists ideas for the next command to run, instead of listing files.
Finally, I was so inspired when I saw how good and cheap DeepSeek v3 was that I just had to build something with it. Times are a-changing, so let's have some fun :)
First of all, I don't like apps cluttering my home directory with a ~/.app folder to hold their config and files. So that's out.
On Linux, using ~/.config and XDG_CONFIG_HOME for config files is well accepted. Then we also have ~/.local/share, etc. After initially dabbling in a more complicated arrangement using multiple dirs, I'm putting everything under config now to keep things simple.
A good bit of ink has been spilled about where CLI apps on macOS ought to store their config files. I find the arguments for ~/.app and ~/.config/app compelling.
When it comes to the question of XDG and macOS, my approach is pragmatic. If XDG shouldn't be used on macOS for some reason, then I'd have to choose a different environment variable name to serve the same purpose. In the interest of simplicity, XDG_CONFIG_HOME is just what pal uses on all platforms.

